title | content | commands | url
---|---|---|---|
Chapter 8. CloudSourcesService | Chapter 8. CloudSourcesService 8.1. UpdateCloudSource PUT /v1/cloud-sources/{cloudSource.id} UpdateCloudSource creates or replaces a cloud source. 8.1.1. Description 8.1.2. Parameters 8.1.2.1. Path Parameters Name Description Required Default Pattern cloudSource.id X null 8.1.2.2. Body Parameter Name Description Required Default Pattern body CloudSourcesServiceUpdateCloudSourceBody X 8.1.3. Return Type Object 8.1.4. Content Type application/json 8.1.5. Responses Table 8.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 8.1.6. Samples 8.1.7. Common object reference 8.1.7.1. CloudSourcesServiceUpdateCloudSourceBody Field Name Required Nullable Type Description Format cloudSource CloudSourcesServiceUpdateCloudSourceBodyCloudSource updateCredentials Boolean If true, cloud_source must include valid credentials. If false, the resource must already exist and credentials in cloud_source are ignored. 8.1.7.2. CloudSourcesServiceUpdateCloudSourceBodyCloudSource CloudSource is an integration which provides a source for discovered clusters. Field Name Required Nullable Type Description Format name String type V1CloudSourceType TYPE_UNSPECIFIED, TYPE_PALADIN_CLOUD, TYPE_OCM, credentials V1CloudSourceCredentials skipTestIntegration Boolean paladinCloud V1PaladinCloudConfig ocm V1OCMConfig 8.1.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.1.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.1.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.1.7.5. V1CloudSourceCredentials Field Name Required Nullable Type Description Format secret String Used for single-valued authentication via long-lived tokens. clientId String Used for client authentication in combination with client_secret. clientSecret String Used for client authentication in combination with client_id. 8.1.7.6. V1CloudSourceType Enum Values TYPE_UNSPECIFIED TYPE_PALADIN_CLOUD TYPE_OCM 8.1.7.7. V1OCMConfig OCMConfig provides information required to fetch discovered clusters from the OpenShift cluster manager. Field Name Required Nullable Type Description Format endpoint String 8.1.7.8. V1PaladinCloudConfig PaladinCloudConfig provides information required to fetch discovered clusters from Paladin Cloud. Field Name Required Nullable Type Description Format endpoint String 8.2. ListCloudSources GET /v1/cloud-sources ListCloudSources returns the list of cloud sources after filtered by requested fields. 8.2.1. Description 8.2.2. Parameters 8.2.2.1. Query Parameters Name Description Required Default Pattern pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null filter.names Matches cloud sources based on their name. String - null filter.types Matches cloud sources based on their type. String - null 8.2.3. Return Type V1ListCloudSourcesResponse 8.2.4. Content Type application/json 8.2.5. Responses Table 8.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListCloudSourcesResponse 0 An unexpected error response. GooglerpcStatus 8.2.6. Samples 8.2.7. Common object reference 8.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.2.7.3. V1CloudSource CloudSource is an integration which provides a source for discovered clusters. Field Name Required Nullable Type Description Format id String name String type V1CloudSourceType TYPE_UNSPECIFIED, TYPE_PALADIN_CLOUD, TYPE_OCM, credentials V1CloudSourceCredentials skipTestIntegration Boolean paladinCloud V1PaladinCloudConfig ocm V1OCMConfig 8.2.7.4. V1CloudSourceCredentials Field Name Required Nullable Type Description Format secret String Used for single-valued authentication via long-lived tokens. clientId String Used for client authentication in combination with client_secret. clientSecret String Used for client authentication in combination with client_id. 8.2.7.5. V1CloudSourceType Enum Values TYPE_UNSPECIFIED TYPE_PALADIN_CLOUD TYPE_OCM 8.2.7.6. V1ListCloudSourcesResponse Field Name Required Nullable Type Description Format cloudSources List of V1CloudSource 8.2.7.7. V1OCMConfig OCMConfig provides information required to fetch discovered clusters from the OpenShift cluster manager. Field Name Required Nullable Type Description Format endpoint String 8.2.7.8. V1PaladinCloudConfig PaladinCloudConfig provides information required to fetch discovered clusters from Paladin Cloud. Field Name Required Nullable Type Description Format endpoint String 8.3. DeleteCloudSource DELETE /v1/cloud-sources/{id} DeleteCloudSource removes a cloud source. 8.3.1. Description 8.3.2. Parameters 8.3.2.1. Path Parameters Name Description Required Default Pattern id X null 8.3.3. Return Type Object 8.3.4. Content Type application/json 8.3.5. Responses Table 8.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 8.3.6. Samples 8.3.7. Common object reference 8.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.3.7.2. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.4. GetCloudSource GET /v1/cloud-sources/{id} GetCloudSource retrieves a cloud source by ID. 8.4.1. Description 8.4.2. Parameters 8.4.2.1. Path Parameters Name Description Required Default Pattern id X null 8.4.3. Return Type V1GetCloudSourceResponse 8.4.4. Content Type application/json 8.4.5. Responses Table 8.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetCloudSourceResponse 0 An unexpected error response. GooglerpcStatus 8.4.6. Samples 8.4.7. Common object reference 8.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. 
Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.4.7.3. V1CloudSource CloudSource is an integration which provides a source for discovered clusters. Field Name Required Nullable Type Description Format id String name String type V1CloudSourceType TYPE_UNSPECIFIED, TYPE_PALADIN_CLOUD, TYPE_OCM, credentials V1CloudSourceCredentials skipTestIntegration Boolean paladinCloud V1PaladinCloudConfig ocm V1OCMConfig 8.4.7.4. V1CloudSourceCredentials Field Name Required Nullable Type Description Format secret String Used for single-valued authentication via long-lived tokens. clientId String Used for client authentication in combination with client_secret. clientSecret String Used for client authentication in combination with client_id. 8.4.7.5. V1CloudSourceType Enum Values TYPE_UNSPECIFIED TYPE_PALADIN_CLOUD TYPE_OCM 8.4.7.6. V1GetCloudSourceResponse Field Name Required Nullable Type Description Format cloudSource V1CloudSource 8.4.7.7. V1OCMConfig OCMConfig provides information required to fetch discovered clusters from the OpenShift cluster manager. Field Name Required Nullable Type Description Format endpoint String 8.4.7.8. V1PaladinCloudConfig PaladinCloudConfig provides information required to fetch discovered clusters from Paladin Cloud. 
Field Name Required Nullable Type Description Format endpoint String 8.5. CreateCloudSource POST /v1/cloud-sources CreateCloudSource creates a cloud source. 8.5.1. Description 8.5.2. Parameters 8.5.2.1. Body Parameter Name Description Required Default Pattern body V1CreateCloudSourceRequest X 8.5.3. Return Type V1CreateCloudSourceResponse 8.5.4. Content Type application/json 8.5.5. Responses Table 8.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1CreateCloudSourceResponse 0 An unexpected error response. GooglerpcStatus 8.5.6. Samples 8.5.7. Common object reference 8.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.5.7.3. V1CloudSource CloudSource is an integration which provides a source for discovered clusters. 
Field Name Required Nullable Type Description Format id String name String type V1CloudSourceType TYPE_UNSPECIFIED, TYPE_PALADIN_CLOUD, TYPE_OCM, credentials V1CloudSourceCredentials skipTestIntegration Boolean paladinCloud V1PaladinCloudConfig ocm V1OCMConfig 8.5.7.4. V1CloudSourceCredentials Field Name Required Nullable Type Description Format secret String Used for single-valued authentication via long-lived tokens. clientId String Used for client authentication in combination with client_secret. clientSecret String Used for client authentication in combination with client_id. 8.5.7.5. V1CloudSourceType Enum Values TYPE_UNSPECIFIED TYPE_PALADIN_CLOUD TYPE_OCM 8.5.7.6. V1CreateCloudSourceRequest Field Name Required Nullable Type Description Format cloudSource V1CloudSource 8.5.7.7. V1CreateCloudSourceResponse Field Name Required Nullable Type Description Format cloudSource V1CloudSource 8.5.7.8. V1OCMConfig OCMConfig provides information required to fetch discovered clusters from the OpenShift cluster manager. Field Name Required Nullable Type Description Format endpoint String 8.5.7.9. V1PaladinCloudConfig PaladinCloudConfig provides information required to fetch discovered clusters from Paladin Cloud. Field Name Required Nullable Type Description Format endpoint String 8.6. TestCloudSource POST /v1/cloud-sources/test TestCloudSource tests a cloud source. 8.6.1. Description 8.6.2. Parameters 8.6.2.1. Body Parameter Name Description Required Default Pattern body V1TestCloudSourceRequest X 8.6.3. Return Type Object 8.6.4. Content Type application/json 8.6.5. Responses Table 8.6. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 8.6.6. Samples 8.6.7. Common object reference 8.6.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.6.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.6.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). 
In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.6.7.3. V1CloudSource CloudSource is an integration which provides a source for discovered clusters. Field Name Required Nullable Type Description Format id String name String type V1CloudSourceType TYPE_UNSPECIFIED, TYPE_PALADIN_CLOUD, TYPE_OCM, credentials V1CloudSourceCredentials skipTestIntegration Boolean paladinCloud V1PaladinCloudConfig ocm V1OCMConfig 8.6.7.4. V1CloudSourceCredentials Field Name Required Nullable Type Description Format secret String Used for single-valued authentication via long-lived tokens. clientId String Used for client authentication in combination with client_secret. clientSecret String Used for client authentication in combination with client_id. 8.6.7.5. V1CloudSourceType Enum Values TYPE_UNSPECIFIED TYPE_PALADIN_CLOUD TYPE_OCM 8.6.7.6. V1OCMConfig OCMConfig provides information required to fetch discovered clusters from the OpenShift cluster manager. Field Name Required Nullable Type Description Format endpoint String 8.6.7.7. V1PaladinCloudConfig PaladinCloudConfig provides information required to fetch discovered clusters from Paladin Cloud. Field Name Required Nullable Type Description Format endpoint String 8.6.7.8. V1TestCloudSourceRequest Field Name Required Nullable Type Description Format cloudSource V1CloudSource updateCredentials Boolean If true, cloud_source must include valid credentials. If false, the resource must already exist and credentials in cloud_source are ignored. 8.7. CountCloudSources GET /v1/count/cloud-sources CountCloudSources returns the number of cloud sources after filtering by requested fields. 8.7.1. Description 8.7.2. Parameters 8.7.2.1. Query Parameters Name Description Required Default Pattern filter.names Matches cloud sources based on their name. String - null filter.types Matches cloud sources based on their type. String - null 8.7.3. Return Type V1CountCloudSourcesResponse 8.7.4. Content Type application/json 8.7.5. Responses Table 8.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountCloudSourcesResponse 0 An unexpected error response. GooglerpcStatus 8.7.6. Samples 8.7.7. Common object reference 8.7.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 8.7.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 8.7.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 8.7.7.3. V1CountCloudSourcesResponse Field Name Required Nullable Type Description Format count Integer int32 | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/cloudsourcesservice |
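The Samples subsections in the chapter above are empty in the captured content. The following sketch is not taken from the documentation; it only illustrates how the documented endpoints and JSON field names could be exercised from Python. The Central address, the ROX_ENDPOINT, ROX_API_TOKEN, and OCM_TOKEN environment variables, the bearer-token authentication scheme, and the OCM endpoint value are assumptions; only the paths (/v1/cloud-sources, /v1/count/cloud-sources) and the lowerCamelCase field names come from the object-reference tables above.

```python
# Hypothetical usage sketch for the CloudSourcesService REST endpoints documented above.
# Assumptions (not from the source page): ROX_ENDPOINT / ROX_API_TOKEN / OCM_TOKEN
# environment variables and bearer-token authentication against RHACS Central.
import os
import requests

BASE = f"https://{os.environ['ROX_ENDPOINT']}"   # assumed Central host, e.g. "central.example.com:443"
HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}  # assumed auth scheme

# CreateCloudSource: POST /v1/cloud-sources with a V1CreateCloudSourceRequest body.
create_body = {
    "cloudSource": {
        "name": "example-ocm-source",                       # illustrative value
        "type": "TYPE_OCM",                                 # V1CloudSourceType enum value
        "credentials": {"secret": os.environ["OCM_TOKEN"]}, # long-lived token (assumed env var)
        "ocm": {"endpoint": "https://api.openshift.com"},   # V1OCMConfig.endpoint (illustrative)
        "skipTestIntegration": False,
    }
}
created = requests.post(f"{BASE}/v1/cloud-sources", json=create_body, headers=HEADERS, timeout=30)
created.raise_for_status()
source_id = created.json()["cloudSource"]["id"]              # V1CreateCloudSourceResponse.cloudSource.id

# ListCloudSources: GET /v1/cloud-sources, filtered and paginated via query parameters.
listed = requests.get(
    f"{BASE}/v1/cloud-sources",
    params={"filter.names": "example-ocm-source", "pagination.limit": 10},
    headers=HEADERS,
    timeout=30,
)
listed.raise_for_status()
print(listed.json().get("cloudSources", []))                 # V1ListCloudSourcesResponse

# CountCloudSources: GET /v1/count/cloud-sources returns a V1CountCloudSourcesResponse.
count = requests.get(f"{BASE}/v1/count/cloud-sources", headers=HEADERS, timeout=30)
count.raise_for_status()
print(count.json()["count"])

# DeleteCloudSource: DELETE /v1/cloud-sources/{id}.
requests.delete(f"{BASE}/v1/cloud-sources/{source_id}", headers=HEADERS, timeout=30).raise_for_status()
```

On error, each endpoint returns a GooglerpcStatus body with code, message, and details, as listed in the response tables above.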
Chapter 2. CatalogSource [operators.coreos.com/v1alpha1] | Chapter 2. CatalogSource [operators.coreos.com/v1alpha1] Description CatalogSource is a repository of CSVs, CRDs, and operator packages. Type object Required metadata spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Required sourceType Property Type Description address string Address is a host that OLM can use to connect to a pre-existing registry. Format: <registry-host or ip>:<port> Only used when SourceType = SourceTypeGrpc. Ignored when the Image field is set. configMap string ConfigMap is the name of the ConfigMap to be used to back a configmap-server registry. Only used when SourceType = SourceTypeConfigmap or SourceTypeInternal. description string displayName string Metadata grpcPodConfig object GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. icon object image string Image is an operator-registry container image to instantiate a registry-server with. Only used when SourceType = SourceTypeGrpc. If present, the address field is ignored. priority integer Priority assigns a weight to the catalog source so that the dependency resolver can prioritize it. Usage: a higher weight indicates that this catalog source is preferred over lower-weighted catalog sources during dependency resolution. The priority value can range from positive to negative within the bounds of int32. The default value for a catalog source with unassigned priority is 0. Catalog sources with the same priority value are ranked lexicographically by name. publisher string runAsRoot boolean RunAsRoot allows admins to indicate that they wish to run the CatalogSource pod in a privileged pod as root. This should only be enabled when running older catalog images which could not be run as non-root. secrets array (string) Secrets represents the set of secrets that can be used to access the contents of the catalog. It is best to keep this list small, since each one will need to be tried for every catalog entry. sourceType string SourceType is the type of source. updateStrategy object UpdateStrategy defines how updated catalog source images can be discovered. It consists of an interval that defines the polling duration and an embedded strategy type. 2.1.2. .spec.grpcPodConfig Description GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. Type object Property Type Description affinity object Affinity is the catalog source's pod's affinity. 
memoryTarget integer-or-string MemoryTarget configures the $GOMEMLIMIT value for the gRPC catalog Pod. This is a soft memory limit for the server, which the runtime will attempt to meet but makes no guarantees that it will do so. If this value is set, the Pod will have the following modifications made to the container running the server: - the $GOMEMLIMIT environment variable will be set to this value in bytes - the memory request will be set to this value - the memory limit will be set to 200% of this value This field should be set if it's desired to reduce the footprint of a catalog server as much as possible, or if a catalog being served is very large and needs more than the default allocation. If your index image has a file-system cache, determine a good approximation for this value by doubling the size of the package cache at /tmp/cache/cache/packages.json in the index image. This field is best-effort; if unset, no default will be used and no Pod memory limit or $GOMEMLIMIT value will be set. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. priorityClassName string If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. securityContextConfig string SecurityContextConfig can be one of legacy or restricted. The CatalogSource's pod is either injected with the right pod.spec.securityContext and pod.spec.container[*].securityContext values to allow the pod to run in Pod Security Admission (PSA) restricted mode, or doesn't set these values at all, in which case the pod can only be run in PSA baseline or privileged namespaces. Currently, if the SecurityContextConfig is unspecified, the default value of legacy is used. Specifying a value other than legacy or restricted results in a validation error. When using older catalog images, which could not be run in restricted mode, the SecurityContextConfig should be set to legacy. In a future version the default will be set to restricted, and catalog maintainers should rebuild their catalogs with a version of opm that supports running catalogSource pods in restricted mode to prepare for these changes. More information about PSA can be found here: https://kubernetes.io/docs/concepts/security/pod-security-admission/ tolerations array Tolerations are the catalog source's pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 2.1.3. .spec.grpcPodConfig.affinity Description Affinity is the catalog source's pod's affinity. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.4. .spec.grpcPodConfig.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.5. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.6. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.7. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.8. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.9. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. 
operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.10. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.11. .spec.grpcPodConfig.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.12. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.13. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.14. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.15. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.16. 
.spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.17. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.18. .spec.grpcPodConfig.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.19. .spec.grpcPodConfig.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.20. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.21. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.22. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.23. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.24. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.25. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.26. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.27. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.28. .spec.grpcPodConfig.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.29. 
.spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.30. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.31. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.32. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.33. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.34. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.35. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.36. .spec.grpcPodConfig.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.37. .spec.grpcPodConfig.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.38. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.39. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.40. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.41. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.42. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.43. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.44. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.45. .spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.46. 
.spec.grpcPodConfig.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.47. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.48. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.49. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.50. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.51. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.52. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.53. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.54. .spec.grpcPodConfig.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.55. .spec.grpcPodConfig.tolerations Description Tolerations are the catalog source's pod's tolerations. Type array 2.1.56. 
.spec.grpcPodConfig.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.57. .spec.icon Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 2.1.58. .spec.updateStrategy Description UpdateStrategy defines how updated catalog source images can be discovered Consists of an interval that defines polling duration and an embedded strategy type Type object Property Type Description registryPoll object 2.1.59. .spec.updateStrategy.registryPoll Description Type object Property Type Description interval string Interval is used to determine the time interval between checks of the latest catalog source version. The catalog operator polls to see if a new version of the catalog source is available. If available, the latest image is pulled and gRPC traffic is directed to the latest catalog source. 2.1.60. .status Description Type object Property Type Description conditions array Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } configMapReference object connectionState object latestImageRegistryPoll string The last time the CatalogSource image registry has been polled to ensure the image is up-to-date message string A human readable message indicating details about why the CatalogSource is in this condition. reason string Reason is the reason the CatalogSource was transitioned to its current state. registryService object 2.1.61. .status.conditions Description Represents the state of a CatalogSource. 
Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. Type array 2.1.62. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.1.63. .status.configMapReference Description Type object Required name namespace Property Type Description lastUpdateTime string name string namespace string resourceVersion string uid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 2.1.64. .status.connectionState Description Type object Required lastObservedState Property Type Description address string lastConnect string lastObservedState string 2.1.65. .status.registryService Description Type object Property Type Description createdAt string port string protocol string serviceName string serviceNamespace string 2.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/catalogsources GET : list objects of kind CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources DELETE : delete collection of CatalogSource GET : list objects of kind CatalogSource POST : create a CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} DELETE : delete a CatalogSource GET : read the specified CatalogSource PATCH : partially update the specified CatalogSource PUT : replace the specified CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status GET : read status of the specified CatalogSource PATCH : partially update status of the specified CatalogSource PUT : replace status of the specified CatalogSource 2.2.1. /apis/operators.coreos.com/v1alpha1/catalogsources Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind CatalogSource Table 2.2. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty 2.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CatalogSource Table 2.5. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CatalogSource Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty HTTP method POST Description create a CatalogSource Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body CatalogSource schema Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 202 - Accepted CatalogSource schema 401 - Unauthorized Empty 2.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the CatalogSource namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CatalogSource Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CatalogSource Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.18. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CatalogSource Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CatalogSource Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body CatalogSource schema Table 2.24. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty 2.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status Table 2.25. Global path parameters Parameter Type Description name string name of the CatalogSource namespace string object name and auth scope, such as for teams and projects Table 2.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CatalogSource Table 2.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.28. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CatalogSource Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.30. Body parameters Parameter Type Description body Patch schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CatalogSource Table 2.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.33. 
Body parameters Parameter Type Description body CatalogSource schema Table 2.34. HTTP responses HTTP code Response body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operatorhub_apis/catalogsource-operators-coreos-com-v1alpha1
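For a quick way to exercise the CatalogSource endpoints documented above without composing raw HTTP requests, the OpenShift CLI can issue equivalent API calls. This is a sketch only; the catalog name, namespace, and patch payload are hypothetical placeholders.
# Read the specified CatalogSource (GET on the path shown in this reference)
oc get --raw /apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/my-catalog
# Partially update the CatalogSource (PATCH) with a server-side dry run, so nothing is persisted
oc patch catalogsource my-catalog -n openshift-marketplace --type merge -p '{"spec":{"displayName":"My Catalog"}}' --dry-run=server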
Chapter 7. Creating and managing topics | Chapter 7. Creating and managing topics Messages in Kafka are always sent to or received from a topic. This chapter describes how to create and manage Kafka topics. 7.1. Partitions and replicas Messages in Kafka are always sent to or received from a topic. A topic is always split into one or more partitions. Partitions act as shards, which means that every message sent by a producer is always written to only a single partition. Because messages are sharded across different partitions, topics are easy to scale horizontally. Each partition can have one or more replicas, which are stored on different brokers in the cluster. When creating a topic you can configure the number of replicas using the replication factor . The replication factor defines the number of copies which will be held within the cluster. One of the replicas for a given partition will be elected as a leader. The leader replica is used by the producers to send new messages and by the consumers to consume messages. The other replicas will be follower replicas. The followers replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so the load is well balanced within the cluster. Note The replication factor determines the number of replicas including the leader and the followers. For example, if you set the replication factor to 3 , then there will be one leader and two follower replicas. 7.2. Message retention The message retention policy defines how long messages will be stored on the Kafka brokers. It can be defined based on time, partition size, or both. For example, you can define that the messages should be kept: For 7 days Until the partition has 1GB of messages. Once the limit is reached, the oldest messages will be removed. For 7 days or until the 1GB limit has been reached. Whichever limit is reached first will be used. Warning Kafka brokers store messages in log segments. Messages which are past their retention policy will be deleted only when a new log segment is created. New log segments are created when the previous log segment exceeds the configured log segment size. Additionally, users can request new segments to be created periodically. Kafka brokers also support a compacting policy. For a topic with the compact policy, the broker will always keep only the last message for each key. Older messages with the same key will be removed from the partition. Because compacting is a periodically executed action, it does not happen immediately when a new message with the same key is sent to the partition. Instead it might take some time until the older messages are removed. For more information about the message retention configuration options, see Section 7.5, "Topic configuration" . 7.3. Topic auto-creation When a producer or consumer tries to send messages to or receive messages from a topic that does not exist, Kafka will, by default, automatically create that topic. This behavior is controlled by the auto.create.topics.enable configuration property, which is set to true by default. To disable it, set auto.create.topics.enable to false in the Kafka broker configuration file: 7.4. Topic deletion Kafka offers the possibility to disable deletion of topics. This is configured through the delete.topic.enable property, which is set to true by default (that is, deleting topics is possible).
When this property is set to false , it is not possible to delete topics: any attempt to delete a topic will appear to succeed, but the topic will not be deleted. 7.5. Topic configuration Auto-created topics will use the default topic configuration, which can be specified in the broker properties file. However, when creating topics manually, their configuration can be specified at creation time. It is also possible to change a topic's configuration after it has been created. The main topic configuration options for manually created topics are: cleanup.policy Configures the retention policy to delete or compact . The delete policy will delete old records. The compact policy will enable log compaction. The default value is delete . For more information about log compaction, see the Kafka website . compression.type Specifies the compression which is used for stored messages. Valid values are gzip , snappy , lz4 , uncompressed (no compression) and producer (retain the compression codec used by the producer). The default value is producer . max.message.bytes The maximum size of a batch of messages allowed by the Kafka broker, in bytes. The default value is 1000012 . min.insync.replicas The minimum number of replicas which must be in sync for a write to be considered successful. The default value is 1 . retention.ms Maximum number of milliseconds for which log segments will be retained. Log segments older than this value will be deleted. The default value is 604800000 (7 days). retention.bytes The maximum number of bytes a partition will retain. Once the partition size grows over this limit, the oldest log segments will be deleted. A value of -1 indicates no limit. The default value is -1 . segment.bytes The maximum file size of a single commit log segment file in bytes. When a segment reaches this size, a new segment will be started. The default value is 1073741824 bytes (1 gibibyte). The defaults for auto-created topics can be specified in the Kafka broker configuration using similar options: log.cleanup.policy See cleanup.policy above. compression.type See compression.type above. message.max.bytes See max.message.bytes above. min.insync.replicas See min.insync.replicas above. log.retention.ms See retention.ms above. log.retention.bytes See retention.bytes above. log.segment.bytes See segment.bytes above. default.replication.factor Default replication factor for automatically created topics. Default value is 1 . num.partitions Default number of partitions for automatically created topics. Default value is 1 . 7.6. Internal topics Internal topics are created and used internally by the Kafka brokers and clients. Kafka has several internal topics. These are used to store consumer offsets ( __consumer_offsets ) or transaction state ( __transaction_state ). These topics can be configured using dedicated Kafka broker configuration options starting with the prefixes offsets.topic. and transaction.state.log. . The most important configuration options are: offsets.topic.replication.factor Number of replicas for the __consumer_offsets topic. The default value is 3 . offsets.topic.num.partitions Number of partitions for the __consumer_offsets topic. The default value is 50 . transaction.state.log.replication.factor Number of replicas for the __transaction_state topic. The default value is 3 . transaction.state.log.num.partitions Number of partitions for the __transaction_state topic. The default value is 50 .
transaction.state.log.min.isr Minimum number of replicas that must acknowledge a write to __transaction_state topic to be considered successful. If this minimum cannot be met, then the producer will fail with an exception. The default value is 2 . 7.7. Creating a topic Use the kafka-topics.sh tool to manage topics. kafka-topics.sh is part of the AMQ Streams distribution and is found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Creating a topic Create a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. The new topic to be created in the --create option. Topic name in the --topic option. The number of partitions in the --partitions option. Topic replication factor in the --replication-factor option. You can also override some of the default topic configuration options using the option --config . This option can be used multiple times to override different options. /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2> Example of the command to create a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2 Verify that the topic exists using kafka-topics.sh . /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <TopicName> Example of the command to describe a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic Additional resources Topic configuration 7.8. Listing and describing topics The kafka-topics.sh tool can be used to list and describe topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Describing a topic Describe a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --describe option to specify that you want to describe a topic. Topic name must be specified in the --topic option. When the --topic option is omitted, it will describe all available topics. /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <TopicName> Example of the command to describe a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic The command lists all partitions and replicas which belong to this topic. It also lists all topic configuration options. Additional resources Topic configuration Creating a topic 7.9. Modifying a topic configuration The kafka-configs.sh tool can be used to modify topic configurations. kafka-configs.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Modify topic configuration Use the kafka-configs.sh tool to get the current configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use --describe option to get the current configuration. 
/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name <TopicName> --describe Example of the command to get the configuration of a topic named mytopic /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe Use the kafka-configs.sh tool to change the configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use the --alter option to modify the current configuration. Specify the options you want to add or change in the --add-config option. /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name <TopicName> --alter --add-config <Option> = <Value> Example of the command to change the configuration of a topic named mytopic /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1 Use the kafka-configs.sh tool to delete an existing configuration option. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use the --alter option together with the --delete-config option to remove an existing configuration option. Specify the options you want to remove in the --delete-config option. /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name <TopicName> --alter --delete-config <Option> Example of the command to delete a configuration option from a topic named mytopic /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas Additional resources Topic configuration Creating a topic 7.10. Deleting a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Deleting a topic Delete a topic using the kafka-topics.sh utility. Specify the host and port of the Kafka broker in the --bootstrap-server option. Use the --delete option to specify that an existing topic should be deleted. The topic name must be specified in the --topic option. /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --delete --topic <TopicName> Example of the command to delete a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic Verify that the topic was deleted using kafka-topics.sh . /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list Example of the command to list all topics /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list Additional resources Creating a topic | [
"auto.create.topics.enable=false",
"delete.topic.enable=false",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <TopicName>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <TopicName>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name <TopicName> --describe",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name <TopicName> --alter --add-config <Option> = <Value>",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name <TopicName> --alter --delete-config <Option>",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --delete --topic <TopicName>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --list",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/topics-str |
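Section 7.5 above lists the broker-level defaults for auto-created topics but does not show them in context. The following broker properties sketch is illustrative only; the values are assumptions and should be tuned for your environment before use.
# Example broker defaults for auto-created topics (illustrative values)
auto.create.topics.enable=false
delete.topic.enable=true
log.cleanup.policy=delete
log.retention.ms=604800000
log.retention.bytes=-1
log.segment.bytes=1073741824
min.insync.replicas=2
default.replication.factor=3
num.partitions=6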
Chapter 13. Volume Snapshots | Chapter 13. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 13.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 13.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. 
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 13.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-snapshots_osp
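If you prefer the CLI to the web console steps above, a volume snapshot can also be created by applying a VolumeSnapshot resource. This is a sketch only: the namespace, PVC name, and volume snapshot class name are assumptions and must match what exists in your cluster (oc get volumesnapshotclass lists the available classes).
oc apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mypvc-snapshot          # hypothetical snapshot name
  namespace: my-project         # hypothetical namespace
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumption; verify in your cluster
  source:
    persistentVolumeClaimName: mypvc                                # hypothetical parent PVC
EOF
oc get volumesnapshot mypvc-snapshot -n my-project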
Chapter 1. Role APIs | Chapter 1. Role APIs 1.1. ClusterRoleBinding [authorization.openshift.io/v1] Description ClusterRoleBinding references a ClusterRole, but not contain it. It can reference any ClusterRole in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. ClusterRoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. RoleBindingRestriction [authorization.openshift.io/v1] Description RoleBindingRestriction is an object that can be matched against a subject (user, group, or service account) to determine whether rolebindings on that subject are allowed in the namespace to which the RoleBindingRestriction belongs. If any one of those RoleBindingRestriction objects matches a subject, rolebindings on that subject in the namespace are allowed. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but not contain it. It can reference any Role in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. Role [authorization.openshift.io/v1] Description Role is a logical grouping of PolicyRules that can be referenced as a unit by RoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/role_apis/role-apis |
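As a minimal illustration of how these role objects are typically created and then inspected through the authorization.openshift.io/v1 API described in this chapter (the user and project names are hypothetical):
# Bind the built-in view cluster role to a user in one namespace
oc create rolebinding view-for-alice --clusterrole=view --user=alice -n my-project
# View the resulting binding through the authorization.openshift.io/v1 group
oc get rolebinding.authorization.openshift.io view-for-alice -n my-project -o yaml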
Chapter 21. Executing the MortgageApprovalProcess process application | Chapter 21. Executing the MortgageApprovalProcess process application Now that you have deployed the project, you can execute the project's defined functionality. For this tutorial, you act as the mortgage broker and input data into a mortgage application form. The MortgageApprovalProcess business process runs and determines whether or not the applicant has offered an acceptable down payment based on the decision rules that you defined earlier. The business process either ends the rule testing or requests that the applicant increase the down payment to proceed. If the application passes the business rule testing, the bank's approver reviews the application and either approves or denies the loan. Prerequisites KIE Server is deployed and connected to Business Central. The Mortgage_Process application has been deployed. The users working on the tasks are members of the following groups and roles: approver group: For the Qualify task broker group: For the Correct Data and Increase Down Payment tasks manager role: For the Final Approval task Procedure Log in to Red Hat Process Automation Manager as Bill (the broker) and click Menu Manage Process Definitions . Click the three vertical dots in the Actions column and select Start to open the Application form, then input the following values into the form fields: Down Payment : 30000 Years of amortization : 10 Name : Ivo Annual Income : 60000 SSN : 123456789 Age of property : 8 Address of property : Brno Locale : Rural Property Sale Price : 50000 Click Submit to start a new process instance. After starting the process instance, the Instance Details view opens. Click the Diagram tab to view the process flow within the process diagram. The state of the process is highlighted as it moves through each task. Log out of Business Central and log back in as Katy . Click Menu Track Task Inbox . This takes you to the Qualify form. Click the three vertical dots in the Actions column and click Claim . The Qualify task Status is now Reserved . Click the Qualify task row to open and review the task information. Click Claim and then Start at the bottom of the form. The application form is now active for approval or denial. To approve the application, select Is mortgage application in limit? and click Complete . In the Task Inbox , click anywhere in the Final Approval row to open the Final Approval task. In the Final Approval row, click the three vertical dots in the Actions column and click Claim . Click anywhere in the Final Approval row to open the Final Approval task. Click Start at the bottom of the form. Note that the Inlimit check box is selected to reflect that the application is ready for final approval. Click Complete . Note The Save and Release buttons are only used to either pause the approval process and save the instance if you are waiting on a field value, or to release the task for another user to modify. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/executing_processes
16.9.3. Running virt-inspector | 16.9.3. Running virt-inspector You can run virt-inspector against any disk image or libvirt guest virtual machine as shown in the following example: Or as shown here: The result will be an XML report ( report.xml ). The main components of the XML file are a top-level <operatingsystems> element containing usually a single <operatingsystem> element, similar to the following: Processing these reports is best done using W3C standard XPath queries. Red Hat Enterprise Linux 6 comes with a command line program ( xpath ) which can be used for simple instances; however, for long-term and advanced usage, you should consider using an XPath library along with your favorite programming language. As an example, you can list out all file system devices using the following XPath query: Or list the names of all applications installed by entering: | [
"virt-inspector --xml disk.img > report.xml",
"virt-inspector --xml GuestName > report.xml",
"<operatingsystems> <operatingsystem> <!-- the type of operating system and Linux distribution --> <name>linux</name> <distro>rhel</distro> <!-- the name, version and architecture --> <product_name>Red Hat Enterprise Linux Server release 6.4 </product_name> <major_version>6</major_version> <minor_version>4</minor_version> <package_format>rpm</package_format> <package_management>yum</package_management> <root>/dev/VolGroup/lv_root</root> <!-- how the filesystems would be mounted when live --> <mountpoints> <mountpoint dev=\"/dev/VolGroup/lv_root\">/</mountpoint> <mountpoint dev=\"/dev/sda1\">/boot</mountpoint> <mountpoint dev=\"/dev/VolGroup/lv_swap\">swap</mountpoint> </mountpoints> < !-- filesystems--> <filesystem dev=\"/dev/VolGroup/lv_root\"> <label></label> <uuid>b24d9161-5613-4ab8-8649-f27a8a8068d3</uuid> <type>ext4</type> <content>linux-root</content> <spec>/dev/mapper/VolGroup-lv_root</spec> </filesystem> <filesystem dev=\"/dev/VolGroup/lv_swap\"> <type>swap</type> <spec>/dev/mapper/VolGroup-lv_swap</spec> </filesystem> <!-- packages installed --> <applications> <application> <name>firefox</name> <version>3.5.5</version> <release>1.fc12</release> </application> </applications> </operatingsystem> </operatingsystems>",
"virt-inspector --xml GuestName | xpath //filesystem/@dev Found 3 nodes: -- NODE -- dev=\"/dev/sda1\" -- NODE -- dev=\"/dev/vg_f12x64/lv_root\" -- NODE -- dev=\"/dev/vg_f12x64/lv_swap\"",
"virt-inspector --xml GuestName | xpath //application/name [...long list...]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virt-inspector-run |
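Following the same pattern, other fields can be pulled out of the report with additional XPath queries. These are illustrative; the exact output depends on the guest being inspected.
virt-inspector --xml GuestName | xpath //operatingsystem/product_name
virt-inspector --xml GuestName | xpath //mountpoints/mountpoint/@dev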
Eclipse Plugin Guide | Eclipse Plugin Guide Migration Toolkit for Applications 7.1 Identify and resolve migration issues by analyzing your applications with the MTA plugin for Eclipse. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/eclipse_plugin_guide/index |
Chapter 8. Monitoring the Network Observability Operator | Chapter 8. Monitoring the Network Observability Operator You can use the web console to monitor alerts related to the health of the Network Observability Operator. 8.1. Viewing health information You can access metrics about health and resource usage of the Network Observability Operator from the Dashboards page in the web console. A health alert banner that directs you to the dashboard can appear on the Network Traffic and Home pages in the event that an alert is triggered. Alerts are generated in the following cases: The NetObservLokiError alert occurs if the flowlogs-pipeline workload is dropping flows because of Loki errors, such as if the Loki ingestion rate limit has been reached. The NetObservNoFlows alert occurs if no flows are ingested for a certain amount of time. Prerequisites You have the Network Observability Operator installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboards dropdown, select Netobserv/Health . Metrics about the health of the Operator are displayed on the page. 8.1.1. Disabling health alerts You can opt out of health alerting by editing the FlowCollector resource: In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Add spec.processor.metrics.disableAlerts to disable health alerts, as in the following YAML sample: 1 You can specify one or a list with both types of alerts to disable. 8.2. Creating Loki rate limit alerts for the NetObserv dashboard You can create custom rules for the Netobserv dashboard metrics to trigger alerts when Loki rate limits have been reached. An example of an alerting rule configuration YAML file is as follows: apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: loki-alerts namespace: openshift-operators-redhat spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: "At any number of requests are responded with the rate limit error code." expr: sum(irate(loki_request_duration_seconds_count{status_code="429"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning Additional resources For more information about creating alerts that you can see on the dashboard, see Creating alerting rules for user-defined projects . | [
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: loki-alerts namespace: openshift-operators-redhat spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/network_observability/network-observability-operator-monitoring |
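Assuming the alerting rule shown above is saved to a file (the file name is hypothetical), it can be applied and verified with standard commands:
oc apply -f loki-alerts.yaml
oc get prometheusrule loki-alerts -n openshift-operators-redhat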
Chapter 2. New Features and Enhancements | 2.1. Continuous Queries Red Hat JBoss Data Grid (JDG) 6.6 introduces the ability to specify Continuous Queries, in which the result set is kept up to date as entries in the cache change, without having to rerun the query. This feature is available in both Library and Client-Server modes. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/chap-new_features_and_enhancements
5.2. Configuring 802.1X Security | 5.2. Configuring 802.1X Security 802.1X security is the name of the IEEE standard for port-based Network Access Control ( PNAC ). It is also called WPA Enterprise . 802.1X security is a way of controlling access to a logical network from a physical one. All clients who want to join the logical network must authenticate with the server (a router, for example) using the correct 802.1X authentication method. 802.1X security is most often associated with securing wireless networks (WLANs), but can also be used to prevent intruders with physical access to the network (LAN) from gaining entry. In the past, DHCP servers were configured not to lease IP addresses to unauthorized users, but for various reasons this practice is both impractical and insecure, and thus is no longer recommended. Instead, 802.1X security is used to ensure a logically-secure network through port-based authentication. 802.1X provides a framework for WLAN and LAN access control and serves as an envelope for carrying one of the Extensible Authentication Protocol (EAP) types. An EAP type is a protocol that defines how security is achieved on the network. 5.2.1. Configuring 802.1X Security for Wi-Fi with nmcli Procedure Set the authenticated key-mgmt (key management) protocol. It configures the keying mechanism for a secure wifi connection. See the nm-settings (5) man page for more details on properties. Configure the 802-1x authentication settings. For the Transport Layer Security (TLS) authentication, see the section called "Configuring TLS Settings" . Table 5.1. The 802-1x authentication settings 802-1x authentication setting Name 802-1x.identity Identity 802-1x.ca-cert CA certificate 802-1x.client-cert User certificate 802-1x.private-key Private key 802-1x.private-key-password Private key password For example, to configure WPA2 Enterprise using the EAP-TLS authentication method, apply the following settings: 5.2.2. Configuring 802.1X Security for Wired with nmcli To configure a wired connection using the nmcli tool, follow the same procedure as for a wireless connection, except the 802-11-wireless.ssid and 802-11-wireless-security.key-mgmt settings. 5.2.3. Configuring 802.1X Security for Wi-Fi with a GUI Procedure Open the Network window (see Section 3.4.1, "Connecting to a Network Using the control-center GUI" ). Select a Wireless network interface from the right-hand-side menu. If necessary, set the symbolic power button to ON and check that your hardware switch is on. Either select the connection name of a new connection, or click the gear wheel icon of an existing connection profile, for which you want to configure 802.1X security. In the case of a new connection, complete any authentication steps to complete the connection and then click the gear wheel icon. Select Security . The following configuration options are available: Security None - Do not encrypt the Wi-Fi connection. WEP 40/128-bit Key - Wired Equivalent Privacy (WEP), from the IEEE 802.11 standard. Uses a single pre-shared key (PSK). WEP 128-bit Passphrase - An MD5 hash of the passphrase will be used to derive a WEP key. LEAP - Lightweight Extensible Authentication Protocol, from Cisco Systems. Dynamic WEP (802.1X) - WEP keys are changed dynamically. Use with the section called "Configuring TLS Settings" WPA & WPA2 Personal - Wi-Fi Protected Access (WPA), from the draft IEEE 802.11i standard. A replacement for WEP. Wi-Fi Protected Access II (WPA2), from the 802.11i-2004 standard. 
Personal mode uses a pre-shared key (WPA-PSK). WPA & WPA2 Enterprise - WPA for use with a RADIUS authentication server to provide IEEE 802.1X network access control. Use with the section called "Configuring TLS Settings" Password Enter the password to be used in the authentication process. From the drop-down menu select one of the following security methods: LEAP , Dynamic WEP (802.1X) , or WPA & WPA2 Enterprise . See the section called "Configuring TLS Settings" for descriptions of which extensible authentication protocol ( EAP ) types correspond to your selection in the Security drop-down menu. 5.2.4. Configuring 802.1X Security for Wired with nm-connection-editor Procedure Enter the nm-connection-editor in a terminal. The Network Connections window appears. Select the ethernet connection you want to edit and click the gear wheel icon, see Section 3.4.6.2, "Configuring a Wired Connection with nm-connection-editor" . Select Security and set the symbolic power button to ON to enable settings configuration. Select from one of following authentication methods: Select TLS for Transport Layer Security and proceed to the section called "Configuring TLS Settings" ; Select FAST for Flexible Authentication through Secure Tunneling and proceed to the section called "Configuring Tunneled TLS Settings" ; Select Tunneled TLS for Tunneled Transport Layer Security , otherwise known as TTLS, or EAP-TTLS and proceed to the section called "Configuring Tunneled TLS Settings" ; Select Protected EAP (PEAP) for Protected Extensible Authentication Protocol and proceed to the section called "Configuring Protected EAP (PEAP) Settings" . Configuring TLS Settings With Transport Layer Security (TLS), the client and server mutually authenticate using the TLS protocol. The server demonstrates that it holds a digital certificate, the client proves its own identity using its client-side certificate, and key information is exchanged. Once authentication is complete, the TLS tunnel is no longer used. Instead, the client and server use the exchanged keys to encrypt data using AES, TKIP or WEP. The fact that certificates must be distributed to all clients who want to authenticate means that the EAP-TLS authentication method is very strong, but also more complicated to set up. Using TLS security requires the overhead of a public key infrastructure (PKI) to manage certificates. The benefit of using TLS security is that a compromised password does not allow access to the (W)LAN: an intruder must also have access to the authenticating client's private key. NetworkManager does not determine the version of TLS supported. NetworkManager gathers the parameters entered by the user and passes them to the daemon, wpa_supplicant , that handles the procedure. It in turn uses OpenSSL to establish the TLS tunnel. OpenSSL itself negotiates the SSL/TLS protocol version. It uses the highest version both ends support. To configure TLS settings, follow the procedure described in Section 5.2.4, "Configuring 802.1X Security for Wired with nm-connection-editor" . The following configuration settings are available: Identity Provide the identity of this server. User certificate Click to browse for, and select, a personal X.509 certificate file encoded with Distinguished Encoding Rules ( DER ) or Privacy Enhanced Mail ( PEM ). CA certificate Click to browse for, and select, an X.509 certificate authority certificate file encoded with Distinguished Encoding Rules ( DER ) or Privacy Enhanced Mail ( PEM ). 
Private key Click to browse for, and select, a private key file encoded with Distinguished Encoding Rules ( DER ), Privacy Enhanced Mail ( PEM ), or the Personal Information Exchange Syntax Standard ( PKCS #12 ). Private key password Enter the password for the private key in the Private key field. Select Show password to make the password visible as you type it. Configuring FAST Settings To configure FAST settings, follow the procedure described in Section 5.2.4, "Configuring 802.1X Security for Wired with nm-connection-editor" . The following configuration settings are available: Anonymous Identity Provide the identity of this server. PAC provisioning Select the check box to enable and then select from Anonymous , Authenticated , and Both . PAC file Click to browse for, and select, a protected access credential ( PAC ) file. Inner authentication GTC - Generic Token Card. MSCHAPv2 - Microsoft Challenge Handshake Authentication Protocol version 2. Username Enter the user name to be used in the authentication process. Password Enter the password to be used in the authentication process. Configuring Tunneled TLS Settings To configure Tunneled TLS settings, follow the procedure described in Section 5.2.4, "Configuring 802.1X Security for Wired with nm-connection-editor" . The following configuration settings are available: Anonymous identity This value is used as the unencrypted identity. CA certificate Click to browse for, and select, a Certificate Authority's certificate. Inner authentication PAP - Password Authentication Protocol. MSCHAP - Challenge Handshake Authentication Protocol. MSCHAPv2 - Microsoft Challenge Handshake Authentication Protocol version 2. CHAP - Challenge Handshake Authentication Protocol. Username Enter the user name to be used in the authentication process. Password Enter the password to be used in the authentication process. Configuring Protected EAP (PEAP) Settings To configure Protected EAP (PEAP) settings, follow the procedure described in Section 5.2.4, "Configuring 802.1X Security for Wired with nm-connection-editor" . The following configuration settings are available: Anonymous Identity This value is used as the unencrypted identity. CA certificate Click to browse for, and select, a Certificate Authority's certificate. PEAP version The version of Protected EAP to use. Automatic, 0 or 1. Inner authentication MSCHAPv2 - Microsoft Challenge Handshake Authentication Protocol version 2. MD5 - Message Digest 5, a cryptographic hash function. GTC - Generic Token Card. Username Enter the user name to be used in the authentication process. Password Enter the password to be used in the authentication process. | [
"nmcli c add type wifi ifname wlo61s0 con-name 'My Wifi Network' 802-11-wireless.ssid 'My Wifi' 802-11-wireless-security.key-mgmt wpa-eap 802-1x.eap tls 802-1x.identity [email protected] 802-1x.ca-cert /etc/pki/my-wifi/ca.crt 802-1x.client-cert /etc/pki/my-wifi/client.crt 802-1x.private-key /etc/pki/my-wifi/client.key 802-1x.private-key-password s3cr3t",
"~]USD nm-connection-editor"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_802.1x_security |
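Section 5.2.2 above states that a wired connection follows the same nmcli procedure as wireless, minus the SSID and key-management settings, but it does not show a command. A possible equivalent for EAP-TLS on a wired interface is sketched below; the interface name, identity, and certificate paths are assumptions.
nmcli c add type ethernet ifname eno1 con-name 'My Wired 802.1X' 802-1x.eap tls 802-1x.identity [email protected] 802-1x.ca-cert /etc/pki/my-lan/ca.crt 802-1x.client-cert /etc/pki/my-lan/client.crt 802-1x.private-key /etc/pki/my-lan/client.key 802-1x.private-key-password s3cr3t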
Chapter 139. KafkaMirrorMaker2Spec schema reference | Chapter 139. KafkaMirrorMaker2Spec schema reference Used in: KafkaMirrorMaker2 Property Property type Description version string The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. replicas integer The number of pods in the Kafka Connect group. Defaults to 3 . image string The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. connectCluster string The cluster alias used for Kafka Connect. The value must match the alias of the target Kafka cluster as specified in the spec.clusters configuration. The target Kafka cluster is used by the underlying Kafka Connect framework for its internal topics. clusters KafkaMirrorMaker2ClusterSpec array Kafka clusters for mirroring. mirrors KafkaMirrorMaker2MirrorSpec array Configuration of the MirrorMaker 2 connectors. resources ResourceRequirements The maximum limits for CPU and memory resources and the requested initial resources. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. jvmOptions JvmOptions JVM Options for pods. jmxOptions KafkaJmxOptions JMX Options. logging InlineLogging , ExternalLogging Logging configuration for Kafka Connect. clientRackInitImage string The image of the init container used for initializing the client.rack . rack Rack Configuration of the node label which will be used as the client.rack consumer configuration. tracing JaegerTracing , OpenTelemetryTracing The configuration of tracing in Kafka Connect. template KafkaConnectTemplate Template for Kafka Connect and Kafka Mirror Maker 2 resources. The template allows users to specify how the Pods , Service , and other services are generated. externalConfiguration ExternalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMaker2Spec-reference |
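To put the properties in this schema reference into context, a minimal KafkaMirrorMaker2 resource might look like the sketch below. The API version, Kafka version, cluster aliases, and bootstrap addresses are assumptions for illustration; consult the product examples for a definitive manifest. Note how connectCluster matches the alias of the target cluster, as required by the spec.
oc apply -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2    # assumption; check the API version shipped with your release
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.7.0                        # assumption; use the Kafka version for your release
  replicas: 1
  connectCluster: "target-cluster"      # must match the alias of the target cluster below
  clusters:
  - alias: "source-cluster"
    bootstrapServers: source-cluster-kafka-bootstrap:9092
  - alias: "target-cluster"
    bootstrapServers: target-cluster-kafka-bootstrap:9092
  mirrors:
  - sourceCluster: "source-cluster"
    targetCluster: "target-cluster"
    sourceConnector: {}
    topicsPattern: ".*"
    groupsPattern: ".*"
EOF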
Chapter 4. Connecting to Red Hat Insights through your own proxy | Chapter 4. Connecting to Red Hat Insights through your own proxy You might choose to use your own proxy to act as a gateway between the public Internet and your private network. This is a good security measure to protect your systems from malicious activity. To connect your systems to Red Hat Insights you must add hostnames, ports and allow additional URLs. 4.1. Connecting to Red Hat Insights through your own proxy Note If you are a Red Hat Satellite user, no proxy is required because Satellite serves as a proxy itself. See this article for more information: How to configure Red Hat Satellite 6 with proxy server To connect to Red Hat Insights, include specific hostnames and ports on your proxy's outgoing network. Prerequisites You have at least one active Red Hat Enterprise Linux (RHEL) subscription. You are logged in to the system as root or have sudo permissions. Your system is registered with Red Hat Subscription Manager (RHSM). Procedure You must include the following hostnames and ports on your proxy's outgoing network, to connect to Red Hat Insights: Navigate to your outgoing network configuration and add the following addresses and ports: Add the Red Hat Hybrid Cloud Console URL so that you can manage your account and hosts in the Red Hat Insights Web UI: Add the URL for Single-Sign-On to Red Hat to ensure access to authorization: Each host using your proxy needs the following details added to the /etc/rhsm/rhsm.conf file. Note This information is required for RHSM, Insights client and remote host configuration (rhc). Add your http proxy server's URL: Add the proxy scheme for authorization purposes (http is the default): Add the port for your proxy server: Optional If your proxy requires authentication, add your user name and password for authenticating: Add any domains you want to opt out from the proxy: By default, Insights client uses RHSM's configuration for a proxy. You can edit the insights-client.conf configuration file to change the proxy: Verification step To verify connectivity, open your command line interface (CLI) and run the following command as root: If connectivity is successful, you will see the following output in your CLI: Additional resources Using Red Hat Subscription Manager | [
"https://cert-api.access.redhat.com:443",
"https://cert.cloud.redhat.com:443",
"https://cert.console.redhat.com:443",
"https://console.redhat.com:443",
"https://sso.redhat.com:443",
"proxy_hostname =",
"proxy_scheme = http",
"proxy_port =",
"proxy_user =",
"proxy_password =",
"no_proxy =",
"/etc/insights-client/insights-client.conf",
"insights-client --test-connection --net-debug",
"End API URL Connection Test: SUCCESS Connectivity tests completed successfully See `/var/log/insights-client/insights-client.log` for more details."
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/connecting_to_red_hat_insights_through_insights_proxy/connecting-through-your-own-proxy |
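Putting the individual directives above together, the proxy section of /etc/rhsm/rhsm.conf might look like the following. The host name, port, credentials, and excluded domain are placeholder values, not defaults.
proxy_hostname = proxy.example.com
proxy_scheme = http
proxy_port = 3128
proxy_user = insights-proxy-user
proxy_password = <password>
no_proxy = .internal.example.com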
Chapter 6. Configuring Knative broker for Apache Kafka | Chapter 6. Configuring Knative broker for Apache Kafka The Knative broker implementation for Apache Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities. In addition to the Knative Eventing components that are provided as part of a core OpenShift Serverless installation, the KnativeKafka custom resource (CR) can be installed by: Cluster administrators, for OpenShift Container Platform Cluster or dedicated administrators, for Red Hat OpenShift Service on AWS or for OpenShift Dedicated. The KnativeKafka CR provides users with additional options, such as: Kafka source Kafka channel Kafka broker Kafka sink 6.1. Installing Knative broker for Apache Kafka The Knative broker implementation for Apache Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Knative broker for Apache Kafka functionality is available in an OpenShift Serverless installation if you have installed the KnativeKafka custom resource. Prerequisites You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. You have access to a Red Hat AMQ Streams cluster. Install the OpenShift CLI ( oc ) if you want to use the verification steps. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You are logged in to the OpenShift Container Platform web console. Procedure In the Administrator perspective, navigate to Operators Installed Operators . Check that the Project dropdown at the top of the page is set to Project: knative-eventing . In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance . Configure the KnativeKafka object in the Create Knative Kafka page. Important To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the enabled switch for the options you want to use to true . These switches are set to false by default. Additionally, to use the Kafka channel, broker, or sink you must specify the bootstrap servers. Use the form for simpler configurations that do not require full control of KnativeKafka object creation. Edit the YAML for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link on the Create Knative Kafka page. Example KnativeKafka custom resource apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 logging: level: INFO 9 1 Enables developers to use the KafkaChannel channel type in the cluster. 2 A comma-separated list of bootstrap servers from your AMQ Streams cluster. 3 Enables developers to use the KafkaSource event source type in the cluster. 4 Enables developers to use the Knative broker implementation for Apache Kafka in the cluster. 
5 A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster. 6 Defines the number of partitions of the Kafka topics, backed by the Broker objects. The default is 10 . 7 Defines the replication factor of the Kafka topics, backed by the Broker objects. The default is 3 . The replicationFactor value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster. 8 Enables developers to use a Kafka sink in the cluster. 9 Defines the log level of the Kafka data plane. Allowed values are TRACE , DEBUG , INFO , WARN and ERROR . The default value is INFO . Warning Do not use DEBUG or TRACE as the logging level in production environments. The outputs from these logging levels are verbose and can degrade performance. Click Create after you have completed any of the optional configurations for Kafka. You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources. Verification Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page. View the list of Conditions for the resource and confirm that they have a status of True . If the conditions have a status of Unknown or False , wait a few moments to refresh the page. Check that the Knative broker for Apache Kafka resources have been created: USD oc get pods -n knative-eventing Example output NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s 6.2. Additional resources for Apache Kafka in Knative Eventing: Source for apache Kafka Sink for Apache Kafka Knative broker implementation for Apache Kafka Configuring kube-rbac-proxy for Knative for Apache Kafka | [
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 logging: level: INFO 9",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/installing_openshift_serverless/serverless-kafka-admin |
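As an optional follow-up to the verification step above, you can also wait on the KnativeKafka resource itself from the CLI instead of polling the pod list. This is a hedged sketch, not part of the documented procedure: it assumes that the knative-kafka resource reports a Ready condition, corresponding to the Conditions you confirm in the web console. # Optional sketch: block until the KnativeKafka resource reports Ready (assumed condition name)
oc wait knativekafka/knative-kafka \
  --namespace knative-eventing \
  --for=condition=Ready \
  --timeout=300s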
Chapter 30. Kubernetes NMState | Chapter 30. Kubernetes NMState 30.1. About the Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server. Important Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power(R), IBM Z(R), IBM(R) LinuxONE, VMware vSphere, and OpenStack installations. Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator. Note The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge. OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: NodeNetworkState Reports the state of the network on that node. NodeNetworkConfigurationPolicy Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. NodeNetworkConfigurationEnactment Reports the network policies enacted upon each node. 30.1.1. Installing the Kubernetes NMState Operator You can install the Kubernetes NMState Operator by using the web console or the CLI. 30.1.1.1. Installing the Kubernetes NMState Operator by using the web console You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You are logged in as a user with cluster-admin privileges. Procedure Select Operators OperatorHub . In the search field below All Items , enter nmstate and click Enter to search for the Kubernetes NMState Operator. Click on the Kubernetes NMState Operator search result. Click on Install to open the Install Operator window. Click Install to install the Operator. After the Operator finishes installing, click View Operator . Under Provided APIs , click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate . In the Name field of the dialog box, ensure the name of the instance is nmstate. Note The name restriction is a known issue. The instance is a singleton for the entire cluster. Accept the default settings and click Create to create the instance. Summary Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. 30.1.1.2. Installing the Kubernetes NMState Operator by using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI ( oc) . After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. 
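Before starting either installation procedure, you can optionally confirm from a shell that your current login satisfies the permission prerequisite. This is a hedged sketch using standard oc commands and is not part of the documented steps:
# Show the user you are currently logged in as
oc whoami
# Check whether that user can perform any action on any resource, which approximates cluster-admin access
oc auth can-i '*' '*' --all-namespaces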
Procedure Create the nmstate Operator namespace: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF Create the OperatorGroup : USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF Subscribe to the nmstate Operator: USD cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Confirm the ClusterServiceVersion (CSV) status for the nmstate Operator deployment equals Succeeded : USD oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase kubernetes-nmstate-operator.4.15.0-202210210157 Succeeded Create an instance of the nmstate Operator: USD cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF Verify that all pods for the NMState Operator are in a Running state: USD oc get pod -n openshift-nmstate Example output Name Ready Status Restarts Age pod/nmstate-handler-wn55p 1/1 Running 0 77s pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s ... 30.1.2. Uninstalling the Kubernetes NMState Operator You can use the Operator Lifecycle Manager (OLM) to uninstall the Kubernetes NMState Operator, but by design OLM does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services. Before you uninstall the Kubernetes NMState Operator from the Subscription resource used by OLM, identify what Kubernetes NMState Operator resources to delete. This identification ensures that you can delete resources without impacting your running cluster. If you need to reinstall the Kubernetes NMState Operator, see "Installing the Kubernetes NMState Operator by using the CLI" or "Installing the Kubernetes NMState Operator by using the web console". Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the jq CLI tool. You are logged in as a user with cluster-admin privileges. Procedure Unsubscribe the Kubernetes NMState Operator from the Subscription resource by running the following command: USD oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator Find the ClusterServiceVersion (CSV) resource that associates with the Kubernetes NMState Operator: USD oc get --namespace openshift-nmstate clusterserviceversion Example output that lists a CSV resource NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded Delete the CSV resource. After you delete the resource, OLM deletes certain resources, such as RBAC , that it created for the Operator. USD oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0 Delete the nmstate CR and any associated Deployment resources by running the following commands: USD oc -n openshift-nmstate delete nmstate nmstate USD oc delete --all deployments --namespace=openshift-nmstate After you have deleted the nmstate CR, remove the nmstate-console-plugin console plugin name from the console.operator.openshift.io/cluster CR.
Store the position of the nmstate-console-plugin entry that exists among the list of enabled plugins by running the following command. The following command uses the jq CLI tool to store the index of the entry in an environment variable named INDEX : INDEX=USD(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plugins | to_entries[] | select(.value == "nmstate-console-plugin") | .key') Remove the nmstate-console-plugin entry from the console.operator.openshift.io/cluster CR by running the following patch command: USD oc patch console.operator.openshift.io cluster --type=json -p "[{\"op\": \"remove\", \"path\": \"/spec/plugins/USDINDEX\"}]" 1 1 INDEX is an auxiliary variable. You can specify a different name for this variable. Delete all the custom resource definitions (CRDs), such as nmstates.nmstate.io , by running the following commands: USD oc delete crd nmstates.nmstate.io USD oc delete crd nodenetworkconfigurationenactments.nmstate.io USD oc delete crd nodenetworkstates.nmstate.io USD oc delete crd nodenetworkconfigurationpolicies.nmstate.io Delete the namespace: USD oc delete namespace openshift-nmstate 30.2. Observing and updating the node network state and configuration 30.2.1. Viewing the network state of a node by using the CLI Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node. Procedure List all the NodeNetworkState objects in the cluster: USD oc get nns Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity: USD oc get nns node01 -o yaml Example output apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: # ... interfaces: # ... route-rules: # ... routes: # ... lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3 1 The name of the NodeNetworkState object is taken from the node. 2 The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes. 3 Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report. 30.2.2. Viewing the network state of a node from the web console As an administrator, you can use the OpenShift Container Platform web console to observe NodeNetworkState resources and network interfaces, and access network details. Procedure Navigate to Networking NodeNetworkState . In the NodeNetworkState page, you can view the list of NodeNetworkState resources and the corresponding interfaces that are created on the nodes. You can use Filter based on Interface state , Interface type , and IP , or the search bar based on criteria Name or Label , to narrow down the displayed NodeNetworkState resources. To access detailed information about a NodeNetworkState resource, click the NodeNetworkState resource name listed in the Name column . To expand and view the Network Details section for the NodeNetworkState resource, click the > icon . Alternatively, you can click on each interface type under the Network interface column to view the network details. 30.2.3. The NodeNetworkConfigurationPolicy manifest file A NodeNetworkConfigurationPolicy (NNCP) manifest file defines policies that the Kubernetes NMState Operator uses to configure networking for nodes that exist in an OpenShift Container Platform cluster.
After you apply a node network policy to a node, the Kubernetes NMState Operator creates an interface on the node. A node network policy includes your requested network configuration and the status of execution for the policy on the cluster as a whole. You can create an NNCP by using either the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. As a postinstallation task, you can create an NNCP or edit an existing NNCP. Note Before you create an NNCP, ensure that you read the "Example policy configurations for different interfaces" document. If you want to delete an NNCP, you can use the oc delete nncp command to complete this action. However, this command does not delete any created objects, such as a bridge interface. Deleting the node network policy that added an interface to a node does not change the configuration of the policy on the node. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator recreates the removed interface whenever a pod or a node is restarted. Effectively deleting the NNCP, the node network policy, and any created interfaces typically requires the following actions: Edit the NNCP and remove interface details from the file. Ensure that you do not remove name , state , and type parameters from the file. Add state: absent under the interfaces.state section of the NNCP. Run oc apply -f <nncp_file_name> . After the Kubernetes NMState Operator applies the node network policy to each node in your cluster, the interface that was previously created on each node is now marked absent . Run oc delete nncp to delete the NNCP. Additional resources Example policy configurations for different interfaces Removing an interface from nodes 30.2.4. Creating an IP over InfiniBand interface on nodes On the OpenShift Container Platform web console, you can install a Red Hat certified third-party Operator, such as the NVIDIA Network Operator, that supports InfiniBand (IPoIB) mode. Typically, you would use the third-party Operator with other vendor infrastructure to manage resources in an OpenShift Container Platform cluster. To create an IPoIB interface on nodes in your cluster, you must define an InfiniBand (IPoIB) interface in a NodeNetworkConfigurationPolicy (NNCP) manifest file. Important The OpenShift Container Platform documentation describes defining only the IPoIB interface configuration in a NodeNetworkConfigurationPolicy (NNCP) manifest file. You must refer to the NVIDIA and other third-party vendor documentation for the majority of the configuring steps. Red Hat support does not extend to anything external to the NNCP configuration. For more information about the NVIDIA Operator, see Getting Started with Red Hat OpenShift (NVIDIA Docs Hub). Prerequisites You installed a Red Hat certified third-party Operator that supports an IPoIB interface. Procedure Create or edit a NodeNetworkConfigurationPolicy (NNCP) manifest file, and then specify an IPoIB interface in the file. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-ipoib spec: # ... interfaces: - description: "" infiniband: mode: datagram 1 pkey: "0xffff" 2 ipv4: address: - ip: 100.125.3.4 prefix-length: 16 dhcp: false enabled: true ipv6: enabled: false name: ibp27s0 state: up type: infiniband 3 # ... 1 datagram is the default mode for an IPoIB interface, and this mode optimizes performance and latency.
connected mode is a supported mode but consider only using this mode when you need to adjust the maximum transmission unit (MTU) value to improve node connectivity with surrounding network devices. 2 Supports a string or an integer value. The parameter defines the protection key, or P-key , for the interface for the purposes of authentication and encrypted communications with a third-party vendor, such as NVIDIA. Values None and 0xffff indicate the protection key for the base interface in an InfiniBand system. 3 Sets the type of interface to infiniband . Apply the NNCP configuration to each node in your cluster by running the following command. The Kubernetes NMState Operator can then create an IPoIB interface on each node. USD oc apply -f <nncp_file_name> 1 1 Replace <nncp_file_name> with the name of your NNCP file. 30.2.5. Managing policy from the web console You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster. Manage the policy from the web console by accessing the list of created policies in the NodeNetworkConfigurationPolicy page under the Networking menu. This page enables you to create, update, monitor, and delete the policies. 30.2.5.1. Monitoring the policy status You can monitor the policy status from the NodeNetworkConfigurationPolicy page. This page displays all the policies created in the cluster in a tabular format, with the following columns: Name The name of the policy created. Matched nodes The count of nodes where the policies are applied. This could be either a subset of nodes based on the node selector or all the nodes on the cluster. Node network state The enactment state of the matched nodes. You can click on the enactment state and view detailed information on the status. To find the desired policy, you can filter the list either based on enactment state by using the Filter option, or by using the search option. 30.2.5.2. Creating a policy You can create a policy by using either a form or YAML in the web console. Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click Create , and select From Form option. In case there are no existing policies, you can alternatively click Create NodeNetworkConfigurationPolicy to create a policy using form. Note To create a policy using YAML, click Create , and select With YAML option. The following steps are applicable to create a policy only by using form. Optional: Check the Apply this NodeNetworkConfigurationPolicy only to specific subsets of nodes using the node selector checkbox to specify the nodes where the policy must be applied. Enter the policy name in the Policy name field. Optional: Enter the description of the policy in the Description field. Optional: In the Policy Interface(s) section, a bridge interface is added by default with preset values in editable fields. Edit the values by executing the following steps: Enter the name of the interface in Interface name field. Select the network state from Network state dropdown. The default selected value is Up . Select the type of interface from Type dropdown. The available values are Bridge , Bonding , and Ethernet . The default selected value is Bridge . Note Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy by using form.
Optional: In the IP configuration section, check IPv4 checkbox to assign an IPv4 address to the interface, and configure the IP address assignment details: Click IP address to configure the interface with a static IP address, or DHCP to auto-assign an IP address. If you have selected IP address option, enter the IPv4 address in IPV4 address field, and enter the prefix length in Prefix length field. If you have selected DHCP option, uncheck the options that you want to disable. The available options are Auto-DNS , Auto-routes , and Auto-gateway . All the options are selected by default. Optional: Enter the port number in Port field. Optional: Check the checkbox Enable STP to enable STP. Optional: To add an interface to the policy, click Add another interface to the policy . Optional: To remove an interface from the policy, click icon to the interface. Note Alternatively, you can click Edit YAML on the top of the page to continue editing the form using YAML. Click Create to complete policy creation. 30.2.6. Updating the policy 30.2.6.1. Updating the policy by using form Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click the icon placed to the policy you want to edit, and click Edit . Edit the fields that you want to update. Click Save . Note Addition of a VLAN interface using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy using form. 30.2.6.2. Updating the policy by using YAML Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click the policy name under the Name column for the policy you want to edit. Click the YAML tab, and edit the YAML. Click Save . 30.2.6.3. Deleting the policy Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click the icon placed to the policy you want to delete, and click Delete . In the pop-up window, enter the policy name to confirm deletion, and click Delete . 30.2.7. Managing policy by using the CLI 30.2.7.1. Creating an interface on nodes Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface. By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector. You can configure multiple nmstate-enabled nodes concurrently. The configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field. Procedure Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8 1 Name of the policy. 
2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%" , or an absolute value (number), such as 3 . 5 Optional: Human-readable description for the interface. 6 Optional: Specifies the search and server settings for the DNS server. Create the node network policy: USD oc apply -f br1-eth1-policy.yaml 1 1 File name of the node network configuration policy manifest. Additional resources Example for creating multiple interfaces in the same policy Examples of different IP management methods in policies 30.2.7.2. Confirming node network policy updates on nodes When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting. Procedure To confirm that a policy has been applied to the cluster, list the policies and their status: USD oc get nncp Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy: USD oc get nncp <policy> -o yaml Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster: USD oc get nnce Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration: USD oc get nnce <node>.<policy> -o yaml 30.2.7.3. Removing an interface from nodes You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent . Removing an interface from a node does not automatically restore the node network configuration to a state. If you want to restore the state, you will need to define that node network configuration in the policy. If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address. Note Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, the object only represents the requested configuration. Similarly, removing an interface does not delete the policy. Procedure Update the NodeNetworkConfigurationPolicy manifest used to create the interface. 
The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Changing the state to absent removes the interface. 5 The name of the interface that is to be unattached from the bridge interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. Update the policy on the node and remove the interface: USD oc apply -f <br1-eth1-policy.yaml> 1 1 File name of the policy manifest. 30.2.8. Example policy configurations for different interfaces Before you read the different example NodeNetworkConfigurationPolicy (NNCP) manifest configurations, consider the following factors when you apply a policy to nodes so that your cluster runs under its best performance conditions: When you need to apply a policy to more than one node, create a NodeNetworkConfigurationPolicy manifest for each target node. The Kubernetes NMState Operator applies the policy to each node with a defined NNCP in an unspecified order. Scoping a policy with this approach reduces the length of time for policy application but risks a cluster-wide outage if an error exists in the cluster's configuration. To avoid this type of error, initially apply an NNCP to some nodes, confirm the NNCP is configured correctly for these nodes, and then proceed with applying the policy to the remaining nodes. When you need to apply a policy to many nodes but you only want to create a single NNCP for all the nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the cluster's configuration file. By setting a lower percentage value for the parameter, you can reduce the risk of a cluster-wide outage if the outage impacts the small percentage of nodes that are receiving the policy application. Consider specifying all related network configurations in a single policy. When a node restarts, the Kubernetes NMState Operator cannot control the order to which it applies policies to nodes. The Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object. 30.2.8.1. Example: Linux bridge interface node network configuration policy Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes samples values that you must replace with your own information. 
apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bridge. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 Disables stp in this example. 11 The node NIC to which the bridge attaches. 30.2.8.2. Example: VLAN interface node network configuration policy Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Note Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface for a node and the related routes for the VLAN interface in the same NodeNetworkConfigurationPolicy manifest. When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object. The following YAML file is an example of a manifest for a VLAN interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a VLAN. 7 The requested state for the interface after creation. 8 The node NIC to which the VLAN is attached. 9 The VLAN tag. 30.2.8.3. Example: Node network configuration policy for virtual functions (Technology Preview) Update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VF) in an existing cluster by applying a NodeNetworkConfigurationPolicy manifest. Important Updating host network settings for SR-IOV network VFs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can apply a NodeNetworkConfigurationPolicy manifest to an existing cluster to complete the following tasks: Configure QoS or MTU host network settings for VFs to optimize performance. Add, remove, or update VFs for a network interface. Manage VF bonding configurations. Note To update host network settings for SR-IOV VFs by using NMState on physical functions that are also managed through the SR-IOV Network Operator, you must set the externallyManaged parameter in the relevant SriovNetworkNodePolicy resource to true . For more information, see the Additional resources section. The following YAML file is an example of a manifest that defines QoS policies for a VF. This file includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: qos 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: ens1f0 4 description: Change QOS on VF0 5 type: ethernet 6 state: up 7 ethernet: sr-iov: total-vfs: 3 8 vfs: - id: 0 9 max-tx-rate: 200 10 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example applies to all nodes with the worker role. 4 Name of the physical function (PF) network interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. 7 The requested state for the interface after configuration. 8 The total number of VFs. 9 Identifies the VF with an ID of 0 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. The following YAML file is an example of a manifest that creates a VLAN interface on top of a VF and adds it to a bonded network interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: addvf 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 maxUnavailable: 3 desiredState: interfaces: - name: ens1f0v1 4 type: ethernet state: up - name: ens1f0v1.477 5 type: vlan state: up vlan: base-iface: ens1f0v1 6 id: 477 - name: bond0 7 description: Add vf 8 type: bond 9 state: up 10 link-aggregation: mode: active-backup 11 options: primary: ens1f1v0.477 12 port: 13 - ens1f1v0.477 - ens1f0v0.477 - ens1f0v1.477 14 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example applies to all nodes with the worker role. 4 Name of the VF network interface. 5 Name of the VLAN network interface. 6 The VF network interface to which the VLAN interface is attached. 7 Name of the bonding network interface. 8 Optional: Human-readable description of the interface. 9 The type of interface. 10 The requested state for the interface after configuration. 11 The bonding policy for the bond. 12 The primary attached bonding port. 13 The ports for the bonded network interface. 14 In this example, this VLAN network interface is added as an additional interface to the bonded network interface. Additional resources Configuring an SR-IOV network device 30.2.8.4. Example: Bond interface node network configuration policy Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 
Note OpenShift Container Platform only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Other bond modes are not supported. The following YAML file is an example of a manifest for a bond interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bond. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 The driver mode for the bond. This example uses an active backup mode. 11 Optional: This example uses miimon to inspect the bond link every 140ms. 12 The subordinate node NICs in the bond. 13 Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default. 30.2.8.5. Example: Ethernet interface node network configuration policy Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 30.2.8.6. Example: Multiple interfaces in the same node network configuration policy You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. The following example YAML file creates a bond that is named bond10 across two NICs and VLAN that is named bond10.103 that connects to the bond. 
apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses hostname node selector. 4 11 Name of the interface. 5 12 Optional: Human-readable description of the interface. 6 13 The type of interface. 7 14 The requested state for the interface after creation. 8 The driver mode for the bond. 9 Optional: This example uses miimon to inspect the bond link every 140ms. 10 The subordinate node NICs in the bond. 15 The node NIC to which the VLAN is attached. 16 The VLAN tag. 17 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address. 18 Enables ipv4 in this example. 30.2.8.7. Example: Network interface with a VRF instance node network configuration policy Associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a NodeNetworkConfigurationPolicy custom resource (CR). Important Associating a VRF instance with a network interface is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources. In a bare-metal environment, you can announce load balancer services through interfaces belonging to a VRF instance by using MetalLB. For more information, see the Additional resources section. The following YAML file is an example of associating a VRF instance to a network interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: "true" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 1 The name of the policy. 2 This example applies the policy to all nodes with the label vrf:true . 3 The name of the interface. 4 The type of interface. This example creates a VRF instance. 5 The node interface to which the VRF attaches. 6 The name of the route table ID for the VRF. Additional resources About virtual routing and forwarding Exposing a service through a network VRF 30.2.9. Capturing the static IP of a NIC attached to a bridge Important Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 30.2.9.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" capture: eth1-nic: interfaces.name=="eth1" 3 eth1-routes: routes.running.next-hop-interface=="eth1" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: "{{ capture.br1-routes.routes.running }}" 1 The name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 3 The reference to the node NIC to which the bridge attaches. 4 The type of interface. This example creates a bridge. 5 The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry. 6 The node NIC to which the bridge attaches. Additional resources The NMPolicy project - Policy syntax 30.2.10. Examples: IP management The following example configuration snippets show different methods of IP management. These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types. 30.2.10.1. Static The following snippet statically configures an IP address on the Ethernet interface: # ... interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true # ... 1 Replace this value with the static IP address for the interface. 30.2.10.2. No IP address The following snippet ensures that the interface has no IP address: # ... interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false # ... Important Always set the state parameter to up when you set both the ipv4.enabled and the ipv6.enabled parameters to false to disable an interface. If you set state: down with this configuration, the interface receives a DHCP IP address because of automatic DHCP assignment. 30.2.10.3. Dynamic host configuration The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS: # ... interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true # ...
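Each snippet in this section is a fragment of the desiredState.interfaces list of an NNCP manifest. As a minimal, hedged sketch of how such a fragment fits into a complete policy, the dynamic host configuration snippet above could be wrapped and applied as follows; the policy name and the worker node selector are illustrative assumptions rather than values taken from this documentation:
# Sketch only: wraps the DHCP snippet in a complete NodeNetworkConfigurationPolicy and applies it
cat << EOF | oc apply -f -
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-dhcp-example        # assumed name for illustration
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # assumed scope: all worker nodes
  desiredState:
    interfaces:
    - name: eth1
      description: DHCP on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true
EOF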
The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS: # ... interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true # ... 30.2.10.4. DNS By default, the nmstate API stores DNS values globally as against storing them in a network interface. For certain situations, you must configure a network interface to store DNS values. Tip Setting a DNS configuration is comparable to modifying the /etc/resolv.conf file. To define a DNS configuration for a network interface, you must initially specify the dns-resolver section in the network interface's YAML configuration file. To apply an NNCP configuration to your network interface, you need to run the oc apply -f <nncp_file_name> command. Important You cannot use the br-ex bridge, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers unless you manually configured a customized br-ex bridge. For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document. The following example shows a default situation that stores DNS values globally: Configure a static DNS without a network interface. Note that when updating the /etc/resolv.conf file on a host node, you do not need to specify an interface, IPv4 or IPv6, in the NodeNetworkConfigurationPolicy (NNCP) manifest. Example of a DNS configuration for a network interface that globally stores DNS values apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251 # ... Important You can specify DNS options under the dns-resolver.config section of your NNCP file as demonstrated in the following example: # ... desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3 # ... If you want to remove the DNS options from your network interface, apply the following configuration to your NNCP and then run the oc apply -f <nncp_file_name> command: # ... dns-resolver: config: {} interfaces: [] # ... The following examples show situations that require configuring a network interface to store DNS values: If you want to rank a static DNS name server over a dynamic DNS name server, define the interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration ( autoconf ) mechanism in the network interface YAML configuration file. Example configuration that adds 192.0.2.1 to DNS name servers retrieved from the DHCPv4 network protocol # ... dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true # ... If you need to configure a network interface to store DNS values instead of adopting the default method, which uses the nmstate API to store DNS values globally, you can set static DNS values and static IP addresses in the network interface YAML file. Important Storing DNS values at the network interface level might cause name resolution issues after you attach the interface to network components, such as an Open vSwitch (OVS) bridge, a Linux bridge, or a bond. 
Example configuration that stores DNS values at the interface level # ... dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false # ... If you want to set static DNS search domains and dynamic DNS name servers for your network interface, define the dynamic interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration ( autoconf ) mechanism in the network interface YAML configuration file. Example configuration that sets example.com and example.org static DNS search domains along with dynamic DNS name server settings # ... dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true # ... 30.2.10.5. Static routing The following snippet configures a static route and a static IP on interface eth1 . dns-resolver: config: # ... interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254 # ... 1 The static IP address for the Ethernet interface. 2 The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. Important You cannot use the OVN-Kubernetes br-ex bridge as the next hop interface when configuring a static route unless you manually configured a customized br-ex bridge. For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document. 30.3. Troubleshooting node network configuration If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as: The configuration fails to be applied on the host. The host loses connection to the default gateway. The host loses connection to the API server. 30.3.1. Troubleshooting an incorrect node network configuration policy configuration You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface. To find an error, you need to investigate the available NMState resources. You can then update the policy with the correct configuration. Prerequisites You ensured that an ens01 interface does not exist on your Linux system. Procedure Create a policy on your cluster.
The following example creates a simple bridge, br1 that has ens01 as its member: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01 # ... Apply the policy to your network interface: USD oc apply -f ens01-bridge-testfail.yaml Example output nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created Verify the status of the policy by running the following command: USD oc get nncp The output shows that the policy failed: Example output NAME STATUS ens01-bridge-testfail FailedToConfigure The policy status alone does not indicate if it failed on all nodes or a subset of nodes. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, the output suggests that the problem is with a specific node configuration. If the policy failed on all nodes, the output suggests that the problem is with the policy. USD oc get nnce The output shows that the policy failed on all nodes: Example output NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure View one of the failed enactments. The following command uses the output tool jsonpath to filter the output: USD oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}' Example output [2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01 The example shows the output from an InvalidArgument error that indicates that the ens01 is an unknown port. For this example, you might need to change the port configuration in the policy configuration file. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node: USD oc get nns control-plane-1 -o yaml The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01 : Example output - ipv4: # ... name: ens1 state: up type: ethernet Correct the error by editing the existing policy: USD oc edit nncp ens01-bridge-testfail # ... port: - name: ens1 Save the policy to apply the correction. Check the status of the policy to ensure it updated successfully: USD oc get nncp Example output NAME STATUS ens01-bridge-testfail SuccessfullyConfigured The updated policy is successfully configured on all nodes in the cluster. 30.3.2. Troubleshooting DNS connectivity issues in a disconnected environment If you experience DNS connectivity issues when configuring nmstate in a disconnected environment, you can configure the DNS server to resolve the list of name servers for the domain root-servers.net . Important Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone. The DNS server does not need to forward a query to an upstream resolver, but the server must return a correct answer for the NS query. 30.3.2.1. 
Configuring the bind9 DNS named server For a cluster configured to query a bind9 DNS server, you can add the root-servers.net zone to a configuration file that contains at least one NS record. For example, you can use /var/named/named.localhost as a zone file that already matches these criteria. Procedure Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command: USD cat >> /etc/named.conf <<EOF zone "root-servers.net" IN { type master; file "named.localhost"; }; EOF Restart the named service by running the following command: USD systemctl restart named Confirm that the root-servers.net zone is present by running the following command: USD journalctl -u named|grep root-servers.net Example output Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0 Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command: USD host -t NS root-servers.net. 127.0.0.1 Example output Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net. 30.3.2.2. Configuring the dnsmasq DNS server If you are using dnsmasq as the DNS server, you can delegate resolution of the root-servers.net domain to another DNS server, for example, by creating a new configuration file that resolves root-servers.net using a DNS server that you specify. Create a configuration file that delegates the domain root-servers.net to another DNS server by running the following command: USD echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf Restart the dnsmasq service by running the following command: USD systemctl restart dnsmasq Confirm that the root-servers.net domain is delegated to another DNS server by running the following command: USD journalctl -u dnsmasq|grep root-servers.net Example output Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command: USD host -t NS root-servers.net. 127.0.0.1 Example output Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net. | [
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF",
"cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get clusterserviceversion -n openshift-nmstate -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase kubernetes-nmstate-operator.4.15.0-202210210157 Succeeded",
"cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF",
"oc get pod -n openshift-nmstate",
"Name Ready Status Restarts Age pod/nmstate-handler-wn55p 1/1 Running 0 77s pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s",
"oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator",
"oc get --namespace openshift-nmstate clusterserviceversion",
"NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded",
"oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0",
"oc -n openshift-nmstate delete nmstate nmstate",
"oc delete --all deployments --namespace=openshift-nmstate",
"INDEX=USD(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plugins | to_entries[] | select(.value == \"nmstate-console-plugin\") | .key')",
"oc patch console.operator.openshift.io cluster --type=json -p \"[{\\\"op\\\": \\\"remove\\\", \\\"path\\\": \\\"/spec/plugins/USDINDEX\\\"}]\" 1",
"oc delete crd nmstates.nmstate.io",
"oc delete crd nodenetworkconfigurationenactments.nmstate.io",
"oc delete crd nodenetworkstates.nmstate.io",
"oc delete crd nodenetworkconfigurationpolicies.nmstate.io",
"oc delete namespace kubernetes-nmstate",
"oc get nns",
"oc get nns node01 -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-ipoib spec: interfaces: - description: \"\" infiniband: mode: datagram 1 pkey: \"0xffff\" 2 ipv4: address: - ip: 100.125.3.4 prefix-length: 16 dhcp: false enabled: true ipv6: enabled: false name: ibp27s0 state: up type: infiniband 3",
"oc apply -f <nncp_file_name> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8",
"oc apply -f br1-eth1-policy.yaml 1",
"oc get nncp",
"oc get nncp <policy> -o yaml",
"oc get nnce",
"oc get nnce <node>.<policy> -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"oc apply -f <br1-eth1-policy.yaml> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: qos 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: ens1f0 4 description: Change QOS on VF0 5 type: ethernet 6 state: up 7 ethernet: sr-iov: total-vfs: 3 8 vfs: - id: 0 9 max-tx-rate: 200 10",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: addvf 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 desiredState: interfaces: - name: ens1f0v1 4 type: ethernet state: up - name: ens1f0v1.477 5 type: vlan state: up vlan: base-iface: ens1f0v1 6 id: 477 - name: bond0 7 description: Add vf 8 type: bond 9 state: up 10 link-aggregation: mode: active-backup 11 options: primary: ens1f1v0.477 12 port: 13 - ens1f1v0.477 - ens1f0v0.477 - ens1f0v1.477 14",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"",
"interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true",
"interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false",
"interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true",
"interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251",
"desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3",
"dns-resolver: config: {} interfaces: []",
"dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true",
"dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false",
"dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true",
"dns-resolver: config: interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01",
"oc apply -f ens01-bridge-testfail.yaml",
"nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail FailedToConfigure",
"oc get nnce",
"NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure",
"oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'",
"[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01",
"oc get nns control-plane-1 -o yaml",
"- ipv4: name: ens1 state: up type: ethernet",
"oc edit nncp ens01-bridge-testfail",
"port: - name: ens1",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail SuccessfullyConfigured",
"cat >> /etc/named.conf <<EOF zone \"root-servers.net\" IN { type master; file \"named.localhost\"; }; EOF",
"systemctl restart named",
"journalctl -u named|grep root-servers.net",
"Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net.",
"echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf",
"systemctl restart dnsmasq",
"journalctl -u dnsmasq|grep root-servers.net",
"Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/kubernetes-nmstate |
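The following is a minimal sketch that consolidates the static routing snippet from section 30.2.10.5 into a complete NodeNetworkConfigurationPolicy and applies it with the same heredoc pattern used elsewhere in this document. It is not part of the original procedure: the policy name and the worker node selector are assumptions for illustration, while the interface name, addresses, and route values are reused from the example above.

cat << EOF | oc apply -f -
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-static-route-policy        # hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""  # assumed: apply to worker nodes only
  desiredState:
    interfaces:
    - name: eth1
      description: Static routing on eth1
      type: ethernet
      state: up
      ipv4:
        dhcp: false
        enabled: true
        address:
        - ip: 192.0.2.251               # static IP from the example
          prefix-length: 24
    routes:
      config:
      - destination: 198.51.100.0/24
        metric: 150
        next-hop-address: 192.0.2.1     # must be in the same subnet as eth1
        next-hop-interface: eth1
        table-id: 254
EOF

After the policy is created, its progress can be checked with oc get nncp and oc get nnce, as described in the troubleshooting section above.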
Chapter 16. Configuring Subsystem Logs The Certificate System subsystems create log files that record events related to activities, such as administration, communications using any of the protocols the server supports, and various other processes employed by the subsystems. While a subsystem instance is running, it keeps a log of information and error messages on all the components it manages. Additionally, the Apache and Tomcat web servers generate error and access logs. Each subsystem instance maintains its own log files for installation, audit, and other logged functions. Log plug-in modules are listeners which are implemented as Java TM classes and are registered in the configuration framework. All the log files and rotated log files, except for audit logs, are located in whatever directory was specified in pki_subsystem_log_path when the instance was created with pkispawn . Regular audit logs are located in the log directory with other types of logs, while signed audit logs are written to /var/log/pki/ instance_name / subsystem_name /signedAudit . The default location for logs can be changed by modifying the configuration. 16.1. About Certificate System Logs Certificate System subsystems keep several different kinds of logs, which provide specific information depending on the type of subsystem, types of services, and individual log settings. The kinds of logs that can be kept for an instance depend on the kind of subsystem that it is. 16.1.1. Signed Audit Logs The Certificate System maintains audit logs for all events, such as requesting, issuing and revoking certificates and publishing CRLs. These logs are then signed. This allows unauthorized access or activity to be detected. An outside auditor can then audit the system if required. The assigned auditor user account is the only account which can view the signed audit logs. This user's certificate is used to sign and encrypt the logs. Audit logging is configured to specify the events that are logged. Signed audit logs are written to /var/log/pki/instance_name/subsystem_name/signedAudit . However, the default location for logs can be changed by modifying the configuration. For more information, see Section 16.3.2, "Using Signed Audit Logs" . 16.1.2. Debug Logs Debug logs, which are enabled by default, are maintained for all subsystems, with varying degrees and types of information. Debug logs contain very specific information for every operation performed by the subsystem, including plug-ins and servlets which are run, connection information, and server request and response messages. The general types of services which are recorded to the debug log are briefly discussed in Section 16.2.1.1, "Services That Are Logged" . These services include authorization requests, processing certificate requests, certificate status checks, archiving and recovering keys, and access to web services. The debug logs for the CA, OCSP, KRA, and TKS record detailed information about the processes for the subsystem. Each log entry has the following format: The message can be a return message from the subsystem or contain values submitted to the subsystem. For example, the TKS records this message for connecting to an LDAP server: The processor is main , and the message is the message from the server about the LDAP connection, and there is no servlet.
The CA, on the other hand, records information about certificate operations as well as subsystem connections: In this case, the processor is the HTTP protocol over the CA's agent port, while it specifies the servlet for handling profiles and contains a message giving a profile parameter (the subsystem owner of a request) and its value (that the KRA initiated the request). Example 16.1. CA Certificate Request Log Messages Likewise, the OCSP shows OCSP request information: 16.1.2.1. Installation Logs All subsystems keep an install log. Every time a subsystem is created, either through the initial installation or by creating additional instances with pkispawn , an installation file is written with the complete debug output from the installation, including any errors and, if the installation is successful, the URL and PIN for the configuration interface of the instance. The file is created in the /var/log/pki/ directory for the instance with a name in the form pki- subsystem_name -spawn. timestamp .log . Each line in the install log follows a step in the installation process. A short command-line sketch for inspecting the debug and installation logs follows the command listing for this chapter. Example 16.2. CA Install Log 16.1.2.2. Tomcat Error and Access Logs The CA, KRA, OCSP, TKS, and TPS subsystems use a Tomcat web server instance for their agent and end-entities' interfaces. Error and access logs are created by the Tomcat web server, which is installed with the Certificate System and provides HTTP services. The error log contains the HTTP error messages the server has encountered. The access log lists access activity through the HTTP interface. Logs created by Tomcat: admin. timestamp catalina. timestamp catalina.out host-manager. timestamp localhost. timestamp localhost_access_log. timestamp manager. timestamp These logs are not available or configurable within the Certificate System; they are only configurable within Apache or Tomcat. See the Apache documentation for information about configuring these logs. 16.1.2.3. Self-Tests Log The self-tests log records information obtained during the self-tests run when the server starts or when the self-tests are manually run. The tests can be viewed by opening this log. This log is not configurable through the Console; it can only be configured by changing settings in the CS.cfg file. For instructions on how to configure logs by editing the CS.cfg file, see the Enabling the Publishing Queue section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . The information about logs in this section does not pertain to this log. See Section 14.9, "Running Self-Tests" for more information about self-tests. | [
"[ date:time ] [ processor ]: servlet : message",
"[10/Jun/2020:05:14:51][main]: Established LDAP connection using basic authentication to host localhost port 389 as cn=Directory Manager",
"[06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.requestownerUSD value=KRA-server.example.com-8443",
"[06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.profileapprovedbyUSD value=admin [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.cert_requestUSD value=MIIBozCCAZ8wggEFAgQqTfoHMIHHgAECpQ4wDDEKMAgGA1UEAxMBeKaBnzANBgkqhkiG9w0BAQEFAAOB [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.profileUSD value=true [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.cert_request_typeUSD value=crmf [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.requestversionUSD value=1.0.0 [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.req_localeUSD value=en [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.requestownerUSD value=KRA-server.example.com-8443 [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.dbstatusUSD value=NOT_UPDATED [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.subjectUSD value=uid=jsmith, [email protected] [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.requeststatusUSD value=begin [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.auth_token.userUSD value=uid=KRA-server.example.com-8443,ou=People,dc=example,dc=com [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.req_keyUSD value=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDreuEsBWq9WuZ2MaBwtNYxvkLP^M HcN0cusY7gxLzB+XwQ/VsWEoObGldg6WwJPOcBdvLiKKfC605wFdynbEgKs0fChV^M k9HYDhmJ8hX6+PaquiHJSVNhsv5tOshZkCfMBbyxwrKd8yZ5G5I+2gE9PUznxJaM^M HTmlOqm4HwFxzy0RRQIDAQAB [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.auth_token.authmgrinstnameUSD value=raCertAuth [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.auth_token.uidUSD value=KRA-server.example.com-8443 [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.auth_token.useridUSD value=KRA-server.example.com-8443 [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.requestor_nameUSD value= [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.profileidUSD value=caUserCert [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.auth_token.userdnUSD value=uid=KRA-server.example.com-4747,ou=People,dc=example,dc=com [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.requestidUSD value=20 [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.auth_token.authtimeUSD value=1212782378071 [06/Jun/2020:14:59:38][http-8443;-Processor24]: ProfileSubmitServlet: key=USDrequest.req_x509infoUSD value=MIICIKADAgECAgEAMA0GCSqGSIb3DQEBBQUAMEAxHjAcBgNVBAoTFVJlZGJ1ZGNv^M bXB1dGVyIERvbWFpbjEeMBwGA1UEAxMVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4X^M DTA4MDYwNjE5NTkzOFoXDTA4MTIwMzE5NTkzOFowOzEhMB8GCSqGSIb3DQEJARYS^M anNtaXRoQGV4YW1wbGUuY29tMRYwFAYKCZImiZPyLGQBARMGanNtaXRoMIGfMA0G^M CSqGSIb3DQEBAQUAA4GNADCBiQKBgQDreuEsBWq9WuZ2MaBwtNYxvkLPHcN0cusY^M 7gxLzB+XwQ/VsWEoObGldg6WwJPOcBdvLiKKfC605wFdynbEgKs0fChVk9HYDhmJ^M 8hX6+PaquiHJSVNhsv5tOshZkCfMBbyxwrKd8yZ5G5I+2gE9PUznxJaMHTmlOqm4^M HwFxzy0RRQIDAQABo4HFMIHCMB8GA1UdIwQYMBaAFG8gWeOJIMt+aO8VuQTMzPBU^M 78k8MEoGCCsGAQUFBwEBBD4wPDA6BggrBgEFBQcwAYYuaHR0cDovL3Rlc3Q0LnJl^M 
ZGJ1ZGNvbXB1dGVyLmxvY2FsOjkwODAvY2Evb2NzcDAOBgNVHQ8BAf8EBAMCBeAw^M HQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMEMCQGA1UdEQQdMBuBGSRyZXF1^M ZXN0LnJlcXVlc3Rvcl9lbWFpbCQ=",
"[07/Jul/2020:06:25:40][http-11180-Processor25]: OCSPServlet: OCSP Request: [07/Jul/2020:06:25:40][http-11180-Processor25]: OCSPServlet: MEUwQwIBADA+MDwwOjAJBgUrDgMCGgUABBSEWjCarLE6/BiSiENSsV9kHjqB3QQU",
"2015-07-22 20:43:13 pkispawn : INFO ... finalizing 'pki.server.deployment.scriptlets.finalization' 2015-07-22 20:43:13 pkispawn : INFO ....... cp -p /etc/sysconfig/pki/tomcat/pki-tomcat/ca/deployment.cfg /var/log/pki/pki-tomcat/ca/archive/spawn_deployment.cfg.20150722204136 2015-07-22 20:43:13 pkispawn : DEBUG ........... chmod 660 /var/log/pki/pki-tomcat/ca/archive/spawn_deployment.cfg.20150722204136 2015-07-22 20:43:13 pkispawn : DEBUG ........... chown 26445:26445 /var/log/pki/pki-tomcat/ca/archive/spawn_deployment.cfg.20150722204136 2015-07-22 20:43:13 pkispawn : INFO ....... generating manifest file called '/etc/sysconfig/pki/tomcat/pki-tomcat/ca/manifest' 2015-07-22 20:43:13 pkispawn : INFO ....... cp -p /etc/sysconfig/pki/tomcat/pki-tomcat/ca/manifest /var/log/pki/pki-tomcat/ca/archive/spawn_manifest.20150722204136 2015-07-22 20:43:13 pkispawn : DEBUG ........... chmod 660 /var/log/pki/pki-tomcat/ca/archive/spawn_manifest.20150722204136 2015-07-22 20:43:13 pkispawn : DEBUG ........... chown 26445:26445 /var/log/pki/pki-tomcat/ca/archive/spawn_manifest.20150722204136 2015-07-22 20:43:13 pkispawn : INFO ....... executing 'systemctl enable pki-tomcatd.target' 2015-07-22 20:43:13 pkispawn : INFO ....... executing 'systemctl daemon-reload' 2015-07-22 20:43:13 pkispawn : INFO ....... executing 'systemctl restart [email protected]' 2015-07-22 20:43:14 pkispawn : INFO END spawning subsystem 'CA' of instance 'pki-tomcat' 2015-07-22 20:43:14 pkispawn : DEBUG"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Logs |
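To make the log locations described above concrete, the following is a minimal command-line sketch for inspecting these files. It is not part of the original chapter: the instance name pki-tomcat and the exact debug log file name are assumptions for illustration only, and the actual locations depend on the pki_subsystem_log_path value used when the instance was created with pkispawn.

# Show recent certificate-profile activity recorded by the CA debug log,
# matching the "[date:time][processor]: servlet: message" entry format.
# The path below is a hypothetical example for a pki-tomcat instance.
grep 'ProfileSubmitServlet' /var/log/pki/pki-tomcat/ca/debug | tail -n 20

# List the pkispawn installation logs for a CA subsystem; the newest file
# records the most recent installation attempt.
ls -t /var/log/pki/pki-ca-spawn.*.log

# Signed audit logs are written under the signedAudit directory for the
# instance and subsystem, and only the auditor account can view them.
ls /var/log/pki/pki-tomcat/ca/signedAudit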
10.3. Preparing to Deploy Geo-replication This section provides an overview of geo-replication deployment scenarios, lists prerequisites, and describes how to set up the environment for a geo-replication session. Section 10.3.1, "Exploring Geo-replication Deployment Scenarios" Section 10.3.2, "Geo-replication Deployment Overview" Section 10.3.3, "Prerequisites" Section 10.3.4.2, "Setting Up your Environment for a Secure Geo-replication Slave" Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session" Section 10.3.5, "Configuring a Meta-Volume" 10.3.1. Exploring Geo-replication Deployment Scenarios Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. This section illustrates the most common deployment scenarios for geo-replication, including the following: Geo-replication over LAN Geo-replication over WAN Geo-replication over the Internet Multi-site cascading geo-replication Geo-replication over LAN Geo-replication over WAN Geo-replication over Internet Multi-site cascading Geo-replication 10.3.2. Geo-replication Deployment Overview Deploying geo-replication involves the following steps: Verify that your environment matches the minimum system requirements. See Section 10.3.3, "Prerequisites" . Determine the appropriate deployment scenario. See Section 10.3.1, "Exploring Geo-replication Deployment Scenarios" . Start geo-replication on the master and slave systems. For the manual method, see Section 10.4, "Starting Geo-replication" . For the gdeploy method, see Starting a geo-replication session in Section 10.5.3, "Controlling geo-replication sessions using gdeploy" . 10.3.3. Prerequisites The following are prerequisites for deploying geo-replication: Note that these prerequisites only need to be carried out once from one cluster to another cluster, so if you are syncing multiple volumes from the same master cluster to the same slave cluster, you need only perform these prerequisites once. The master and slave volumes must use the same version of Red Hat Gluster Storage. Nodes in the slave volume must not be part of the master volume. Two separate trusted storage pools are required. Disable the performance.quick-read option in the slave volume using the following command: Time must be synchronized between all master and slave nodes before geo-replication is configured. Red Hat recommends setting up a network time protocol service to keep time synchronized between bricks and servers, and to avoid out-of-time synchronization errors. See Network Time Protocol Setup for more information. Add the required port for geo-replication from the ports listed in Section 3.1.2, "Port Access Requirements" . Key-based SSH authentication without a password is required between one node of the master volume (the node from which the geo-replication create command will be executed), and one node of the slave volume (the node whose IP/hostname will be mentioned in the slave name when running the geo-replication create command). Create the public and private keys using ssh-keygen (without a passphrase) on the master node: Copy the public key to the slave node using the following command: If you are setting up a non-root geo-replication session, then copy the public key to the respective user location. Note - Key-based SSH authentication without a password is only required from the master node to the slave node; the slave node does not need this level of access.
- The ssh-copy-id command does not work if the ssh authorized_keys file is configured in a custom location. You must copy the contents of the .ssh/id_rsa.pub file from the Master and paste it into the authorized_keys file in the custom location on the Slave node. Gsyncd also requires key-based SSH authentication without a password between every node in the master cluster and every node in the slave cluster. The gluster system:: execute gsec_create command creates secret-pem files on all the nodes in the master, and is used to implement the SSH authentication connection. The push-pem option in the geo-replication create command pushes these keys to all slave nodes. For more information on the gluster system::execute gsec_create and push-pem commands, see Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session" . 10.3.4. Setting Up your Environment You can set up your environment for a geo-replication session in the following ways: Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session" - In this method, the slave mount is owned by the root user. Section 10.3.4.2, "Setting Up your Environment for a Secure Geo-replication Slave" - This method is more secure as the slave mount is owned by a normal user. 10.3.4.1. Setting Up your Environment for Geo-replication Session Creating Geo-replication Sessions To create a common pem pub file, run the following command on the master node where the key-based SSH authentication connection is configured: Alternatively, you can create the pem pub file by running the following command on the master node where the key-based SSH authentication connection is configured. This alternate command generates geo-replication session-specific SSH keys on all the master nodes and collects public keys from all peer nodes. It also provides a detailed view of the command status. Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes. For example: Note There must be key-based SSH authentication access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave. If the verification fails, you can use the force option, which ignores the failed verification and creates a geo-replication session. The slave volume is in read-only mode by default. However, in case of a failover-failback situation, the original master is made read-only by default as the session is from the original slave to the original master. Enable shared storage for master and slave volumes: For more information on shared storage, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: For example: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Start the geo-replication by running the following command on the master node: For example, Verify the status of the created session by running the following command: These steps are also consolidated in a short command sketch after the command listing at the end of this section. 10.3.4.2. Setting Up your Environment for a Secure Geo-replication Slave Geo-replication supports access to Red Hat Gluster Storage slaves through SSH using an unprivileged account (user account with non-zero UID). This method is more secure and it reduces the master's capabilities over the slave to the minimum.
This feature relies on mountbroker , an internal service of glusterd which manages the mounts for unprivileged slave accounts. You must perform additional steps to configure glusterd with the appropriate mountbroker's access control directives. The following example demonstrates this process: Perform the following steps on all the Slave nodes to set up an auxiliary glusterFS mount for the unprivileged account: On all the slave nodes, create a new group. For example, geogroup . Note You must not use multiple groups for the mountbroker setup. You can create multiple user accounts but the group should be the same for all the non-root users. On all the slave nodes, create an unprivileged account. For example, geoaccount . Add geoaccount as a member of the geogroup group. On any one of the Slave nodes, run the following command to set up the mountbroker root directory and group. For example, On any one of the Slave nodes, run the following commands to add the volume and user to the mountbroker service. For example, Check the status of the setup by running the following command: The output displays the mountbroker status for every peer node in the slave cluster. Restart the glusterd service on all the Slave nodes. After you set up an auxiliary glusterFS mount for the unprivileged account on all the Slave nodes, perform the following steps to set up a non-root geo-replication session: Set up key-based SSH authentication from one of the master nodes to the user on one of the slave nodes. For example, to set up key-based SSH authentication to the user geoaccount . Create a common pem pub file by running the following command on the master nodes, where the key-based SSH authentication connection is configured to the user on the slave nodes: Create a geo-replication relationship between the master and the slave to the user by running the following command on the master node: For example, If you have multiple slave volumes and/or multiple accounts, create a geo-replication session with that particular user and volume. For example, Enable shared storage for master and slave volumes: For more information on shared storage, see Section 11.12, "Setting up Shared Storage Volume" . On the slave node that is used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root with the user name, master volume name, and slave volume name as the arguments. For example, Configure the meta-volume for geo-replication: For example: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Start the geo-replication session with the slave user by running the following command on the master node: For example, Verify the status of the geo-replication session by running the following command on the master node: Deleting mountbroker geo-replication options after deleting a session After a mountbroker geo-replication session is deleted, you must remove the volumes per mountbroker user. Important You must first stop and delete the geo-replication session before you can delete volumes from the mountbroker. For more information, see Section 10.4.5, "Stopping a Geo-replication Session" and Section 10.4.6, "Deleting a Geo-replication Session" . To remove the volumes per mountbroker user: For example, If the volume to be removed is the last one for the mountbroker user, the user is also removed. Important If you have a secured geo-replication setup, you must prefix the unprivileged user account to the slave volume in the command.
For example, to execute a geo-replication status command, run the following: In this command, geoaccount is the name of the unprivileged user account. 10.3.5. Configuring a Meta-Volume The meta-volume, also known as gluster_shared_storage , is the gluster volume used for internal purposes. Setting use_meta_volume to true enables geo-replication to use the shared volume to store lock files, which helps in handling worker fail-overs. For effective handling of node fail-overs in the Master volume, geo-replication requires this shared storage to be available across all nodes of the cluster. Hence, ensure that a gluster volume named gluster_shared_storage is created in the cluster, and is mounted at /var/run/gluster/shared_storage on all the nodes in the cluster. For more information on setting up the shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Configure the meta-volume for geo-replication: For example: Important On RHEL 8, ensure that the rsync_full_access and rsync_client booleans are set to on to prevent file permission issues during the rsync operations required by geo-replication. | [
"gluster volume set slavevol performance.quick-read off",
"ssh-keygen",
"ssh-copy-id -i identity_file root@slave_node_IPaddress/Hostname",
"gluster system:: execute gsec_create",
"gluster-georep-sshkey generate +--------------+-------------+---------------+ | NODE | NODE STATUS | KEYGEN STATUS | +--------------+-------------+---------------+ | node1 | UP | OK | | node2 | UP | OK | | node3 | UP | OK | | node4 | UP | OK | | node5 | UP | OK | | localhost | UP | OK | +--------------+-------------+---------------+",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL create push-pem [force]",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol create push-pem",
"gluster volume set all cluster.enable-shared-storage enable",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start [force]",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status",
"gluster-mountbroker setup <MOUNT ROOT> <GROUP>",
"gluster-mountbroker setup /var/mountbroker-root geogroup",
"gluster-mountbroker add <VOLUME> <USER>",
"gluster-mountbroker add slavevol geoaccount",
"gluster-mountbroker status NODE NODE STATUS MOUNT ROOT GROUP USERS --------------------------------------------------------------------------------------- localhost UP /var/mountbroker-root(OK) geogroup(OK) geoaccount(slavevol) node2 UP /var/mountbroker-root(OK) geogroup(OK) geoaccount(slavevol)",
"service glusterd restart",
"ssh-keygen ssh-copy-id -i identity_file geoaccount@slave_node_IPaddress/Hostname",
"gluster system:: execute gsec_create",
"gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem",
"gluster volume geo-replication MASTERVOL geoaccount2@SLAVENODE::slavevol2 create push-pem",
"gluster volume set all cluster.enable-shared-storage enable",
"/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount MASTERVOL SLAVEVOL_NAME",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true",
"gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol start",
"gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol status",
"gluster-mountbroker remove [--volume volume ] [--user user ]",
"gluster-mountbroker remove --volume slavevol --user geoaccount gluster-mountbroker remove --user geoaccount gluster-mountbroker remove --volume slavevol",
"gluster volume geo-replication MASTERVOL geoaccount@SLAVENODE::slavevol status",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Preparing_to_Deploy_Geo-replication |
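As a quick reference, the following sketch strings together the root-owned session workflow from Section 10.3.4.1, reusing the example values from that section (master volume Volume1 and slave storage.backup.com::slave-vol). It is a summary sketch rather than a replacement for the full procedure: run the commands from the master node after the SSH prerequisites described above are in place, and substitute your own volume and host names.

# Generate the common pem pub file on the master node.
gluster system:: execute gsec_create

# Create the session and push the pem files to the slave nodes.
gluster volume geo-replication Volume1 storage.backup.com::slave-vol create push-pem

# Enable shared storage and point geo-replication at the meta-volume.
gluster volume set all cluster.enable-shared-storage enable
gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true

# Start the session and verify its status.
gluster volume geo-replication Volume1 storage.backup.com::slave-vol start
gluster volume geo-replication Volume1 storage.backup.com::slave-vol status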
Server Developer Guide | Server Developer Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services | [
"Let's pretend to have an extremely long line that does not fit This one is short",
"Let's pretend to have an extremely long line that does not fit This one is short",
"curl -d \"client_id=admin-cli\" -d \"username=admin\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"curl -H \"Authorization: bearer eyJhbGciOiJSUz...\" \"http://localhost:8080/admin/realms/master\"",
"curl -d \"client_id=<YOUR_CLIENT_ID>\" -d \"client_secret=<YOUR_CLIENT_SECRET>\" -d \"grant_type=client_credentials\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"bin/kc.[sh|bat] start --spi-theme-welcome-theme=custom-theme",
"bin/kc.[sh|bat] start --spi-theme-static-max-age=-1 --spi-theme-cache-themes=false --spi-theme-cache-templates=false",
"parent=base import=common/keycloak",
"javaVersion=USD{java.version} unixHome=USD{env.HOME:Unix home not found} windowsHome=USD{env.HOMEPATH:Windows home not found}",
".login-pf body { background: DimGrey none; }",
"styles=css/styles.css",
"styles=css/login.css css/styles.css",
"alert('Hello');",
"scripts=js/script.js",
"body { background-image: url('../img/image.jpg'); background-size: cover; }",
"<img src=\"USD{url.resourcesPath}/img/image.jpg\" alt=\"My image description\">",
"<img src=\"USD{url.resourcesUrl}/img/image.jpg\" alt=\"My image description\">",
"usernameOrEmail=Your Username",
"usernameOrEmail=Brukernavn password=Passord",
"locales=en,no",
"locale_no=Norsk",
"kcLogoIdP-myProvider = fa fa-lock",
"<#import \"template.ftl\" as layout> <h1>HELLO WORLD!</h1>",
"passwordResetSubject=My password recovery passwordResetBody=Reset password link: {0} passwordResetBodyHtml=<a href=\"{0}\">Reset password</a>",
"{ \"themes\": [{ \"name\" : \"mytheme\", \"types\": [ \"login\", \"email\" ] }] }",
"GET /realms/{realm}/broker/{provider_alias}/token HTTP/1.1 Host: localhost:8080 Authorization: Bearer <KEYCLOAK ACCESS TOKEN>",
"/{auth-server-root}/realms/{realm}/broker/{provider}/link?client_id={id}&redirect_uri={uri}&nonce={nonce}&hash={hash}",
"KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance(\"SHA-256\"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute(\"hash\", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path(\"/realms/{realm}/broker/{provider}/link\") .queryParam(\"nonce\", nonce) .queryParam(\"hash\", hash) .queryParam(\"client_id\", clientId) .queryParam(\"redirect_uri\", redirectUri).build(realm, provider).toString();",
"package org.acme.provider; import public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory { @Override public ThemeSelectorProvider create(KeycloakSession session) { return new MyThemeSelectorProvider(session); } @Override public void init(Config.Scope config) { } @Override public void postInit(KeycloakSessionFactory factory) { } @Override public void close() { } @Override public String getId() { return \"myThemeSelector\"; } }",
"package org.acme.provider; import public class MyThemeSelectorProvider implements ThemeSelectorProvider { public MyThemeSelectorProvider(KeycloakSession session) { } @Override public String getThemeName(Theme.Type type) { return \"my-theme\"; } @Override public void close() { } }",
"org.acme.provider.MyThemeSelectorProviderFactory",
"bin/kc.[sh|bat] --spi-theme-selector-my-theme-selector-enabled=true --spi-theme-selector-my-theme-selector-theme=my-theme",
"public void init(Config.Scope config) { String themeName = config.get(\"theme\"); }",
"public class MyThemeSelectorProvider implements ThemeSelectorProvider { private KeycloakSession session; public MyThemeSelectorProvider(KeycloakSession session) { this.session = session; } @Override public String getThemeName(Theme.Type type) { return session.getContext().getRealm().getLoginTheme(); } }",
"public class CustomOIDCLoginProtocolFactory extends OIDCLoginProtocolFactory { // Some customizations here @Override public int order() { return 1; } }",
"package org.acme.provider; import public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory, ServerInfoAwareProviderFactory { @Override public Map<String, String> getOperationalInfo() { Map<String, String> ret = new LinkedHashMap<>(); ret.put(\"theme-name\", \"my-theme\"); return ret; } }",
"bin/kc.[sh|bat] build --spi-hostname-provider=default",
"find . -type f -name \"*.jar\" -exec unzip -l {} \\; | grep some.file",
"./kc.sh -Dquarkus.launch.rebuild=true",
"bin/kc.[sh|bat] build --spi-user-cache-infinispan-enabled=false",
"AuthenticationFlowError = Java.type(\"org.keycloak.authentication.AuthenticationFlowError\"); function authenticate(context) { LOG.info(script.name + \" --> trace auth for: \" + user.username); if ( user.username === \"tester\" && user.getAttribute(\"someAttribute\") && user.getAttribute(\"someAttribute\").contains(\"someValue\")) { context.failure(AuthenticationFlowError.INVALID_USER); return; } context.success(); }",
"- User-authentication-subflow REQUIRED -- Cookie ALTERNATIVE -- Identity-provider-redirect ALTERNATIVE - Your-Script-Authenticator REQUIRED",
"// prints can be used to log information for debug purpose. print(\"STARTING CUSTOM MAPPER\"); var inputRequest = keycloakSession.getContext().getHttpRequest(); var params = inputRequest.getDecodedFormParameters(); var output = params.getFirst(\"user_input\"); exports = output;",
"META-INF/keycloak-scripts.json my-script-authenticator.js my-script-policy.js my-script-mapper.js",
"{ \"authenticators\": [ { \"name\": \"My Authenticator\", \"fileName\": \"my-script-authenticator.js\", \"description\": \"My Authenticator from a JS file\" } ], \"policies\": [ { \"name\": \"My Policy\", \"fileName\": \"my-script-policy.js\", \"description\": \"My Policy from a JS file\" } ], \"mappers\": [ { \"name\": \"My Mapper\", \"fileName\": \"my-script-mapper.js\", \"description\": \"My Mapper from a JS file\" } ], \"saml-mappers\": [ { \"name\": \"My Mapper\", \"fileName\": \"my-script-mapper.js\", \"description\": \"My Mapper from a JS file\" } ] }",
"package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } }",
"package org.keycloak.storage; /** * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); }",
"public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return \"file-provider\"; } public FileProvider create(KeycloakSession session, ComponentModel model) { }",
"package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); }",
"\"f:\" + component id + \":\" + external id",
"f:332a234e31234:wburke",
"org.keycloak.examples.federation.properties.ClasspathPropertiesStorageFactory org.keycloak.examples.federation.properties.FilePropertiesStorageFactory",
"public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { }",
"protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; }",
"@Override public UserModel getUserByUsername(RealmModel realm, String username) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(RealmModel realm, String id) { StorageId storageId = new StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(realm, username); } @Override public UserModel getUserByEmail(RealmModel realm, String email) { return null; }",
"\"f:\" + component id + \":\" + username",
"@Override public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(PasswordCredentialModel.TYPE) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(PasswordCredentialModel.TYPE); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); }",
"@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(PasswordCredentialModel.TYPE)) throw new ReadOnlyException(\"user is read only for this update\"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Stream<String> getDisableableCredentialTypesStream(RealmModel realm, UserModel user) { return Stream.empty(); }",
"public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = \"readonly-property-file\"; @Override public String getId() { return PROVIDER_NAME; }",
"private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream(\"/users.properties\"); if (is == null) { logger.warn(\"Could not find users.properties in classpath\"); } else { try { properties.load(is); } catch (IOException ex) { logger.error(\"Failed to load users.properties file\", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }",
"kc.[sh|bat] start --spi-storage-readonly-property-file-path=/other-users.properties",
"public void init(Config.Scope config) { String path = config.get(\"path\"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); }",
"@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }",
"org.keycloak.examples.federation.properties.FilePropertiesStorageFactory",
"List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { }",
"public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name(\"path\") .type(ProviderConfigProperty.STRING_TYPE) .label(\"Path\") .defaultValue(\"USD{jboss.server.config.dir}/example-users.properties\") .helpText(\"File path to properties file\") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { return configMetadata; }",
"@Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst(\"path\"); if (fp == null) throw new ComponentValidationException(\"user property file does not exist\"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException(\"user property file does not exist\"); } }",
"@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst(\"path\"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); }",
"public void save() { String path = model.getConfig().getFirst(\"path\"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, \"\"); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } }",
"public static final String UNSET_PASSWORD=\"#USD!-UNSET-PASSWORD\"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } }",
"@Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); }",
"@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(PasswordCredentialModel.TYPE)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; }",
"@Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(PasswordCredentialModel.TYPE)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(PasswordCredentialModel.TYPE); } @Override public Stream<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return disableableTypes.stream(); }",
"@Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public Stream<UserModel> searchForUserStream(RealmModel realm, String search, Integer firstResult, Integer maxResults) { Predicate<String> predicate = \"*\".equals(search) ? username -> true : username -> username.contains(search); return properties.keySet().stream() .map(String.class::cast) .filter(predicate) .skip(firstResult) .map(username -> getUserByUsername(realm, username)) .limit(maxResults); }",
"@Override public Stream<UserModel> searchForUserStream(RealmModel realm, Map<String, String> params, Integer firstResult, Integer maxResults) { // only support searching by username String usernameSearchString = params.get(\"username\"); if (usernameSearchString != null) return searchForUserStream(realm, usernameSearchString, firstResult, maxResults); // if we are not searching by username, return all users return searchForUserStream(realm, \"*\", firstResult, maxResults); }",
"@Override public Stream<UserModel> getGroupMembersStream(RealmModel realm, GroupModel group, Integer firstResult, Integer maxResults) { return Stream.empty(); } @Override public Stream<UserModel> searchForUserByUserAttributeStream(RealmModel realm, String attrName, String attrValue) { return Stream.empty(); }",
"package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider, UserAttributeFederatedStorage, UserBrokerLinkFederatedStorage, UserConsentFederatedStorage, UserNotBeforeFederatedStorage, UserGroupMembershipFederatedStorage, UserRequiredActionsFederatedStorage, UserRoleMappingsFederatedStorage, UserFederatedUserCredentialStore { }",
"protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; }",
"protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = UserStoragePrivateUtil.userLocalStorage(session).getUserByUsername(realm, username); if (local == null) { local = UserStoragePrivateUtil.userLocalStorage(session).addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; }",
"package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); }",
"package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); }",
"/** * All these methods effect an entire cluster of Keycloak instances. * * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. * */ void clear(); }",
"public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); }",
"public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When was the model was loaded from database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); }",
"/admin/realms/{realm-name}/components",
"public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type, @QueryParam(\"name\") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path(\"{id}\") ComponentResource component(@PathParam(\"id\") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); }",
"import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmResource realmResource = keycloak.realm(\"master\"); RealmRepresentation realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName(\"home\"); component.setProviderId(\"readonly-property-file\"); component.setProviderType(\"org.keycloak.storage.UserStorageProvider\"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle(\"path\", \"~/users.properties\"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), \"org.keycloak.storage.UserStorageProvider\", \"home\"); component = components.get(0); // Update a component component.getConfig().putSingle(\"path\", \"~/my-users.properties\"); realmResource.components().component(component.getId()).update(component); // Remove a component realmREsource.components().component(component.getId()).remove();",
"public class CustomQueryProvider extends UserQueryProvider.Streams { @Override Stream<UserModel> getUsersStream(RealmModel realm, Integer firstResult, Integer maxResults) { // custom logic here } @Override Stream<UserModel> searchForUserStream(String search, RealmModel realm) { // custom logic here } }",
"char[] c; try (VaultCharSecret cSecret = session.vault().getCharSecret(SECRET_NAME)) { // ... use cSecret c = cSecret.getAsArray().orElse(null); // if c != null, it now contains password } // if c != null, it now contains garbage"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/server_developer_guide/index |
Chapter 7. DMN model execution | Chapter 7. DMN model execution You can create or import DMN files in your Red Hat Decision Manager project using Business Central or package the DMN files as part of your project knowledge JAR (KJAR) file without Business Central. After you implement your DMN files in your Red Hat Decision Manager project, you can execute the DMN decision service by deploying the KIE container that contains it to KIE Server for remote access or by manipulating the KIE container directly as a dependency of the calling application. Other options for creating and deploying DMN knowledge packages are also available, and most are similar for all types of knowledge assets, such as DRL files or process definitions. For information about including external DMN assets with your project packaging and deployment method, see Packaging and deploying an Red Hat Decision Manager project . 7.1. Embedding a DMN call directly in a Java application A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You typically embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the DMN definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments. Using Maven dependencies enables further flexibility because the specific version of the decision can dynamically change, (for example, by using a system property), and it can be periodically scanned for updates and automatically updated. This introduces an external dependency on the deploy time of the service, but executes the decision locally, reducing reliance on an external service being available during run time. Prerequisites You have built the DMN project as a KJAR artifact and deployed it to a Maven repository, or you have included your DMN assets as part of your project classpath: For more information about project packaging and deployment and executable models, see Packaging and deploying an Red Hat Decision Manager project . Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: <!-- Required for the DMN runtime API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. 
Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Obtain DMNRuntime from the KIE container and a reference to the DMN model to be evaluated, by using the model namespace and modelName : DMNRuntime dmnRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(DMNRuntime.class); String namespace = "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a"; String modelName = "dmn-movieticket-ageclassification"; DMNModel dmnModel = dmnRuntime.getModel(namespace, modelName); Execute the decision services for the desired model: DMNContext dmnContext = dmnRuntime.newContext(); 1 for (Integer age : Arrays.asList(1,12,13,64,65,66)) { dmnContext.set("Age", age); 2 DMNResult dmnResult = dmnRuntime.evaluateAll(dmnModel, dmnContext); 3 for (DMNDecisionResult dr : dmnResult.getDecisionResults()) { 4 log.info("Age: " + age + ", " + "Decision: '" + dr.getDecisionName() + "', " + "Result: " + dr.getResult()); } } 1 Instantiate a new DMN Context to be the input for the model evaluation. Note that this example is looping through the Age Classification decision multiple times. 2 Assign input variables for the input DMN context. 3 Evaluate all DMN decisions defined in the DMN model. 4 Each evaluation may result in one or more results, creating the loop. This example prints the following output: If the DMN model was not previously compiled as an executable model for more efficient execution, you can enable the following property when you execute your DMN models: 7.2. Executing a DMN service using the KIE Server Java client API The KIE Server Java client API provides a lightweight approach to invoking a remote DMN service either through the REST or JMS interfaces of KIE Server. This approach reduces the number of runtime dependencies necessary to interact with a KIE base. Decoupling the calling code from the decision definition also increases flexibility by enabling them to iterate independently at the appropriate pace. For more information about the KIE Server Java client API, see Interacting with Red Hat Decision Manager using KIE APIs . Prerequisites KIE Server is installed and configured, including a known user name and credentials for a user with the kie-server role. For installation options, see Planning a Red Hat Decision Manager installation . You have built the DMN project as a KJAR artifact and deployed it to KIE Server: For more information about project packaging and deployment and executable models, see Packaging and deploying an Red Hat Decision Manager project . You have the ID of the KIE container containing the DMN model. If more than one model is present, you must also know the model namespace and model name of the relevant model. 
Procedure In your client application, add the following dependency to the relevant classpath of your Java project: <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . Instantiate a KieServicesClient instance with the appropriate connection information. Example: KieServicesConfiguration conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); 1 conf.setMarshallingFormat(MarshallingFormat.JSON); 2 KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(conf); 1 The connection information: Example URL: http://localhost:8080/kie-server/services/rest/server The credentials should reference a user with the kie-server role. 2 The Marshalling format is an instance of org.kie.server.api.marshalling.MarshallingFormat . It controls whether the messages will be JSON or XML. Options for Marshalling format are JSON, JAXB, or XSTREAM. Obtain a DMNServicesClient from the KIE server Java client connected to the related KIE Server by invoking the method getServicesClient() on the KIE server Java client instance: DMNServicesClient dmnClient = kieServicesClient.getServicesClient(DMNServicesClient.class ); The dmnClient can now execute decision services on KIE Server. Execute the decision services for the desired model. Example: for (Integer age : Arrays.asList(1,12,13,64,65,66)) { DMNContext dmnContext = dmnClient.newContext(); 1 dmnContext.set("Age", age); 2 ServiceResponse<DMNResult> serverResp = 3 dmnClient.evaluateAll(USDkieContainerId, USDmodelNamespace, USDmodelName, dmnContext); DMNResult dmnResult = serverResp.getResult(); 4 for (DMNDecisionResult dr : dmnResult.getDecisionResults()) { log.info("Age: " + age + ", " + "Decision: '" + dr.getDecisionName() + "', " + "Result: " + dr.getResult()); } } 1 Instantiate a new DMN Context to be the input for the model evaluation. Note that this example is looping through the Age Classification decision multiple times. 2 Assign input variables for the input DMN Context. 3 Evaluate all the DMN Decisions defined in the DMN model: USDkieContainerId is the ID of the container where the KJAR containing the DMN model is deployed USDmodelNamespace is the namespace for the model. USDmodelName is the name for the model. 4 The DMN Result object is available from the server response. At this point, the dmnResult contains all the decision results from the evaluated DMN model. 
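For example, a single decision result can be read back from the full result by name, using the standard DMNResult accessors (a minimal sketch that assumes the AgeClassification decision from the earlier examples): DMNDecisionResult ageClassification = dmnResult.getDecisionResultByName("AgeClassification"); if (ageClassification.getEvaluationStatus() == DMNDecisionResult.DecisionEvaluationStatus.SUCCEEDED) { log.info("AgeClassification: " + ageClassification.getResult()); }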
You can also execute only a specific DMN decision in the model by using alternative methods of the DMNServicesClient . Note If the KIE container only contains one DMN model, you can omit USDmodelNamespace and USDmodelName because the KIE Server API selects it by default. 7.3. Executing a DMN service using the KIE Server REST API Directly interacting with the REST endpoints of KIE Server provides the most separation between the calling code and the decision logic definition. The calling code is completely free of direct dependencies, and you can implement it in an entirely different development platform such as Node.js or .NET . The examples in this section demonstrate Nix-style curl commands but provide relevant information to adapt to any REST client. When you use a REST endpoint of KIE Server, the best practice is to define a domain object POJO Java class, annotated with standard KIE Server marshalling annotations. For example, the following code is using a domain object Person class that is annotated properly: Example POJO Java class @javax.xml.bind.annotation.XmlAccessorType(javax.xml.bind.annotation.XmlAccessType.FIELD) public class Person implements java.io.Serializable { static final long serialVersionUID = 1L; private java.lang.String id; private java.lang.String name; @javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter(org.kie.internal.jaxb.LocalDateXmlAdapter.class) private java.time.LocalDate dojoining; public Person() { } public java.lang.String getId() { return this.id; } public void setId(java.lang.String id) { this.id = id; } public java.lang.String getName() { return this.name; } public void setName(java.lang.String name) { this.name = name; } public java.time.LocalDate getDojoining() { return this.dojoining; } public void setDojoining(java.time.LocalDate dojoining) { this.dojoining = dojoining; } public Person(java.lang.String id, java.lang.String name, java.time.LocalDate dojoining) { this.id = id; this.name = name; this.dojoining = dojoining; } } For more information about the KIE Server REST API, see Interacting with Red Hat Decision Manager using KIE APIs . Prerequisites KIE Server is installed and configured, including a known user name and credentials for a user with the kie-server role. For installation options, see Planning a Red Hat Decision Manager installation . You have built the DMN project as a KJAR artifact and deployed it to KIE Server: For more information about project packaging and deployment and executable models, see Packaging and deploying an Red Hat Decision Manager project . You have the ID of the KIE container containing the DMN model. If more than one model is present, you must also know the model namespace and model name of the relevant model. Procedure Determine the base URL for accessing the KIE Server REST API endpoints. This requires knowing the following values (with the default local deployment values as an example): Host ( localhost ) Port ( 8080 ) Root context ( kie-server ) Base REST path ( services/rest/ ) Example base URL in local deployment: http://localhost:8080/kie-server/services/rest/ Determine user authentication requirements. When users are defined directly in the KIE Server configuration, HTTP Basic authentication is used and requires the user name and password. Successful requests require that the user have the kie-server role. 
The following example demonstrates how to add credentials to a curl request: If KIE Server is configured with Red Hat Single Sign-On, the request must include a bearer token: curl -H "Authorization: bearer USDTOKEN" <request> Specify the format of the request and response. The REST API endpoints work with both JSON and XML formats and are set using request headers: JSON XML Optional: Query the container for a list of deployed decision models: [GET] server/containers/{containerId}/dmn Example curl request: Sample XML output: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <response type="SUCCESS" msg="OK models successfully retrieved from container 'MovieDMNContainer'"> <dmn-model-info-list> <model> <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace> <model-name>dmn-movieticket-ageclassification</model-name> <model-id>_99</model-id> <decisions> <dmn-decision-info> <decision-id>_3</decision-id> <decision-name>AgeClassification</decision-name> </dmn-decision-info> </decisions> </model> </dmn-model-info-list> </response> Sample JSON output: { "type" : "SUCCESS", "msg" : "OK models successfully retrieved from container 'MovieDMNContainer'", "result" : { "dmn-model-info-list" : { "models" : [ { "model-namespace" : "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a", "model-name" : "dmn-movieticket-ageclassification", "model-id" : "_99", "decisions" : [ { "decision-id" : "_3", "decision-name" : "AgeClassification" } ] } ] } } } Execute the model: [POST] server/containers/{containerId}/dmn Example curl request: Example JSON request: { "model-namespace" : "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a", "model-name" : "dmn-movieticket-ageclassification", "decision-name" : [ ], "decision-id" : [ ], "dmn-context" : {"Age" : 66} } Example XML request (JAXB format): <?xml version="1.0" encoding="UTF-8"?> <dmn-evaluation-context> <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace> <model-name>dmn-movieticket-ageclassification</model-name> <dmn-context xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Age"> <value xsi:type="xs:int" xmlns:xs="http://www.w3.org/2001/XMLSchema">66</value> </element> </dmn-context> </dmn-evaluation-context> Note Regardless of the request format, the request requires the following elements: Model namespace Model name Context object containing input values Example JSON response: { "type" : "SUCCESS", "msg" : "OK from container 'MovieDMNContainer'", "result" : { "dmn-evaluation-result" : { "messages" : [ ], "model-namespace" : "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a", "model-name" : "dmn-movieticket-ageclassification", "decision-name" : [ ], "dmn-context" : { "Age" : 66, "AgeClassification" : "Senior" }, "decision-results" : { "_3" : { "messages" : [ ], "decision-id" : "_3", "decision-name" : "AgeClassification", "result" : "Senior", "status" : "SUCCEEDED" } } } } } Example XML (JAXB format) response: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <response type="SUCCESS" msg="OK from container 'MovieDMNContainer'"> <dmn-evaluation-result> <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace> <model-name>dmn-movieticket-ageclassification</model-name> <dmn-context xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <type>MAP</type> <element 
xsi:type="jaxbStringObjectPair" key="Age"> <value xsi:type="xs:int" xmlns:xs="http://www.w3.org/2001/XMLSchema">66</value> </element> <element xsi:type="jaxbStringObjectPair" key="AgeClassification"> <value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">Senior</value> </element> </dmn-context> <messages/> <decisionResults> <entry> <key>_3</key> <value> <decision-id>_3</decision-id> <decision-name>AgeClassification</decision-name> <result xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">Senior</result> <messages/> <status>SUCCEEDED</status> </value> </entry> </decisionResults> </dmn-evaluation-result> </response> 7.4. REST endpoints for specific DMN models Red Hat Decision Manager provides model-specific DMN KIE Server endpoints that you can use to interact with your specific DMN model without using the Business Central user interface. For each DMN model in a container in Red Hat Decision Manager, the following KIE Server REST endpoints are automatically generated based on the content of the DMN model: POST /server/containers/{containerId}/dmn/models/{modelname} : A business-domain endpoint for evaluating a specified DMN model in a container POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName} : A business-domain endpoint for evaluating a specified decision service component in a specific DMN model available in a container POST /server/containers/{containerId}/dmn/models/{modelname}/dmnresult : An endpoint for evaluating a specified DMN model containing customized body payload and returning a DMNResult response, including business-domain context, helper messages, and helper decision pointers POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName}/dmnresult : An endpoint for evaluating a specified decision service component in a specific DMN model and returning a DMNResult response, including the business-domain context, helper messages, and help decision pointers for the decision service GET /server/containers/{containerId}/dmn/models/{modelname} : An endpoint for returning standard DMN XML without decision logic and containing the inputs and decisions of the specified DMN model GET /server/containers/{containerId}/dmn/openapi.json (|.yaml) : An endpoint for retrieving Swagger or OAS for the DMN models in a specified container You can use these endpoints to interact with a DMN model or a specific decision service within a model. As you decide between using business-domain and dmnresult variants of these REST endpoints, review the following considerations: REST business-domain endpoints : Use this endpoint type if a client application is only concerned with a positive evaluation outcome, is not interested in parsing Info or Warn messages, and only needs an HTTP 5xx response for any errors. This type of endpoint is also helpful for application-like clients, due to singleton coercion of decision service results that resemble the DMN modeling behavior. REST dmnresult endpoints : Use this endpoint type if a client needs to parse Info , Warn , or Error messages in all cases. 
For each endpoint, use a REST client or curl utility to send requests with the following components: Base URL : http:// HOST : PORT /kie-server/services/rest/ Path parameters : {containerId} : The string identifier of the container, such as mykjar-project {modelName} : The string identifier of the DMN model, such as Traffic Violation {decisionServiceName} : The string identifier of the decision service component in the DMN DRG, such as TrafficViolationDecisionService dmnresult : The string identifier that enables the endpoint to return a full DMNResult response with more detailed Info , Warn , and Error messaging HTTP headers : For POST requests only: accept : application/json content-type : application/json HTTP methods : GET or POST The examples in the following endpoints are based on a mykjar-project container that contains a Traffic Violation DMN model, containing a TrafficViolationDecisionService decision service component. For all of these endpoints, if a DMN evaluation Error message occurs, a DMNResult response is returned along with an HTTP 5xx error. If a DMN Info or Warn message occurs, the relevant response is returned along with the business-domain REST body, in the X-Kogito-decision-messages extended HTTP header, to be used for client-side business logic. When there is a requirement of more refined client-side business logic, the client can use the dmnresult variant of the endpoints. Retrieve Swagger or OAS for DMN models in a specified container GET /server/containers/{containerId}/dmn/openapi.json (|.yaml) Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/openapi.json (|.yaml) Return the DMN XML without decision logic GET /server/containers/{containerId}/dmn/models/{modelname} Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation Example curl request Example response (XML) <?xml version='1.0' encoding='UTF-8'?> <dmn:definitions xmlns:dmn="http://www.omg.org/spec/DMN/20180521/MODEL/" xmlns="https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF" xmlns:di="http://www.omg.org/spec/DMN/20180521/DI/" xmlns:kie="http://www.drools.org/kie/dmn/1.2" xmlns:feel="http://www.omg.org/spec/DMN/20180521/FEEL/" xmlns:dmndi="http://www.omg.org/spec/DMN/20180521/DMNDI/" xmlns:dc="http://www.omg.org/spec/DMN/20180521/DC/" id="_1C792953-80DB-4B32-99EB-25FBE32BAF9E" name="Traffic Violation" expressionLanguage="http://www.omg.org/spec/DMN/20180521/FEEL/" typeLanguage="http://www.omg.org/spec/DMN/20180521/FEEL/" namespace="https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF"> <dmn:extensionElements/> <dmn:itemDefinition id="_63824D3F-9173-446D-A940-6A7F0FA056BB" name="tDriver" isCollection="false"> <dmn:itemComponent id="_9DAB5DAA-3B44-4F6D-87F2-95125FB2FEE4" name="Name" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_856BA8FA-EF7B-4DF9-A1EE-E28263CE9955" name="Age" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_FDC2CE03-D465-47C2-A311-98944E8CC23F" name="State" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_D6FD34C4-00DC-4C79-B1BF-BBCF6FC9B6D7" name="City" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_7110FE7E-1A38-4C39-B0EB-AEEF06BA37F4" name="Points" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> 
</dmn:itemDefinition> <dmn:itemDefinition id="_40731093-0642-4588-9183-1660FC55053B" name="tViolation" isCollection="false"> <dmn:itemComponent id="_39E88D9F-AE53-47AD-B3DE-8AB38D4F50B3" name="Code" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_1648EA0A-2463-4B54-A12A-D743A3E3EE7B" name="Date" isCollection="false"> <dmn:typeRef>date</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_9F129EAA-4E71-4D99-B6D0-84EEC3AC43CC" name="Type" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> <dmn:allowedValues kie:constraintType="enumeration" id="_626A8F9C-9DD1-44E0-9568-0F6F8F8BA228"> <dmn:text>"speed", "parking", "driving under the influence"</dmn:text> </dmn:allowedValues> </dmn:itemComponent> <dmn:itemComponent id="_DDD10D6E-BD38-4C79-9E2F-8155E3A4B438" name="Speed Limit" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_229F80E4-2892-494C-B70D-683ABF2345F6" name="Actual Speed" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id="_2D4F30EE-21A6-4A78-A524-A5C238D433AE" name="tFine" isCollection="false"> <dmn:itemComponent id="_B9F70BC7-1995-4F51-B949-1AB65538B405" name="Amount" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_F49085D6-8F08-4463-9A1A-EF6B57635DBD" name="Points" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:inputData id="_1929CBD5-40E0-442D-B909-49CEDE0101DC" name="Violation"> <dmn:variable id="_C16CF9B1-5FAB-48A0-95E0-5FCD661E0406" name="Violation" typeRef="tViolation"/> </dmn:inputData> <dmn:decision id="_4055D956-1C47-479C-B3F4-BAEB61F1C929" name="Fine"> <dmn:variable id="_8C1EAC83-F251-4D94-8A9E-B03ACF6849CD" name="Fine" typeRef="tFine"/> <dmn:informationRequirement id="_800A3BBB-90A3-4D9D-BA5E-A311DED0134F"> <dmn:requiredInput href="#_1929CBD5-40E0-442D-B909-49CEDE0101DC"/> </dmn:informationRequirement> </dmn:decision> <dmn:inputData id="_1F9350D7-146D-46F1-85D8-15B5B68AF22A" name="Driver"> <dmn:variable id="_A80F16DF-0DB4-43A2-B041-32900B1A3F3D" name="Driver" typeRef="tDriver"/> </dmn:inputData> <dmn:decision id="_8A408366-D8E9-4626-ABF3-5F69AA01F880" name="Should the driver be suspended?"> <dmn:question>Should the driver be suspended due to points on his license?</dmn:question> <dmn:allowedAnswers>"Yes", "No"</dmn:allowedAnswers> <dmn:variable id="_40387B66-5D00-48C8-BB90-E83EE3332C72" name="Should the driver be suspended?" 
typeRef="string"/> <dmn:informationRequirement id="_982211B1-5246-49CD-BE85-3211F71253CF"> <dmn:requiredInput href="#_1F9350D7-146D-46F1-85D8-15B5B68AF22A"/> </dmn:informationRequirement> <dmn:informationRequirement id="_AEC4AA5F-50C3-4FED-A0C2-261F90290731"> <dmn:requiredDecision href="#_4055D956-1C47-479C-B3F4-BAEB61F1C929"/> </dmn:informationRequirement> </dmn:decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <di:extension/> <dmndi:DMNShape id="dmnshape-_1929CBD5-40E0-442D-B909-49CEDE0101DC" dmnElementRef="_1929CBD5-40E0-442D-B909-49CEDE0101DC" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="708" y="350" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_4055D956-1C47-479C-B3F4-BAEB61F1C929" dmnElementRef="_4055D956-1C47-479C-B3F4-BAEB61F1C929" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="709" y="210" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_1F9350D7-146D-46F1-85D8-15B5B68AF22A" dmnElementRef="_1F9350D7-146D-46F1-85D8-15B5B68AF22A" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="369" y="344" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_8A408366-D8E9-4626-ABF3-5F69AA01F880" dmnElementRef="_8A408366-D8E9-4626-ABF3-5F69AA01F880" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="534" y="83" width="133" height="63"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNEdge id="dmnedge-_800A3BBB-90A3-4D9D-BA5E-A311DED0134F" dmnElementRef="_800A3BBB-90A3-4D9D-BA5E-A311DED0134F"> <di:waypoint x="758" y="375"/> <di:waypoint x="759" y="235"/> </dmndi:DMNEdge> <dmndi:DMNEdge id="dmnedge-_982211B1-5246-49CD-BE85-3211F71253CF" dmnElementRef="_982211B1-5246-49CD-BE85-3211F71253CF"> <di:waypoint x="419" y="369"/> <di:waypoint x="600.5" y="114.5"/> </dmndi:DMNEdge> <dmndi:DMNEdge id="dmnedge-_AEC4AA5F-50C3-4FED-A0C2-261F90290731" dmnElementRef="_AEC4AA5F-50C3-4FED-A0C2-261F90290731"> <di:waypoint x="759" y="235"/> <di:waypoint x="600.5" y="114.5"/> </dmndi:DMNEdge> </dmndi:DMNDiagram> </dmndi:DMNDI> Evaluate a specified DMN model in a specified container POST /server/containers/{containerId}/dmn/models/{modelname} Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation Example curl request Example POST request body with input data { "Driver": { "Points": 15 }, "Violation": { "Date": "2021-04-08", "Type": "speed", "Actual Speed": 135, "Speed Limit": 100 } } Example response (JSON) { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 135, "Code": null, "Date": "2021-04-08" }, "Driver": { "Points": 15, "State": null, "City": null, "Age": null, "Name": null }, "Fine": { "Points": 7, "Amount": 1000 }, "Should the driver be suspended?": "Yes" } Evaluate a specified decision service within a specified DMN model in a container POST 
/server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName} For this endpoint, the request body must contain all the requirements of the decision service. The response is the resulting DMN context of the decision service, including the decision values, the original input values, and all other parametric DRG components in serialized form. For example, a business knowledge model is available in string-serialized form in its signature. If the decision service is composed of a single-output decision, the response is the resulting value of that specific decision. This behavior provides an equivalent value at the API level of a specification feature when invoking the decision service in the model itself. As a result, you can, for example, interact with a DMN decision service from web applications. Figure 7.1. Example TrafficViolationDecisionService decision service with single-output decision Figure 7.2. Example TrafficViolationDecisionService decision service with multiple-output decision Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService Example POST request body with input data { "Driver": { "Points": 2 }, "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 } } Example curl request Example response for single-output decision (JSON) "No" Example response for multiple-output decision (JSON) { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 120 }, "Driver": { "Points": 2 }, "Fine": { "Points": 3, "Amount": 500 }, "Should the driver be suspended?": "No" } Evaluate a specified DMN model in a specified container and return a DMNResult response POST /server/containers/{containerId}/dmn/models/{modelname}/dmnresult Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/dmnresult Example POST request body with input data { "Driver": { "Points": 2 }, "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 } } Example curl request Example response (JSON) { "namespace": "https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF", "modelName": "Traffic Violation", "dmnContext": { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 120, "Code": null, "Date": null }, "Driver": { "Points": 2, "State": null, "City": null, "Age": null, "Name": null }, "Fine": { "Points": 3, "Amount": 500 }, "Should the driver be suspended?": "No" }, "messages": [], "decisionResults": [ { "decisionId": "_4055D956-1C47-479C-B3F4-BAEB61F1C929", "decisionName": "Fine", "result": { "Points": 3, "Amount": 500 }, "messages": [], "evaluationStatus": "SUCCEEDED" }, { "decisionId": "_8A408366-D8E9-4626-ABF3-5F69AA01F880", "decisionName": "Should the driver be suspended?", "result": "No", "messages": [], "evaluationStatus": "SUCCEEDED" } ] } Evaluate a specified decision service within a DMN model in a specified container and return a DMNResult response POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName}/dmnresult Example REST endpoint http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService/dmnresult Example POST request body with input data { "Driver": { "Points": 2 }, "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 } } Example curl request Example response (JSON) { "namespace": 
"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF", "modelName": "Traffic Violation", "dmnContext": { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 120, "Code": null, "Date": null }, "Driver": { "Points": 2, "State": null, "City": null, "Age": null, "Name": null }, "Should the driver be suspended?": "No" }, "messages": [], "decisionResults": [ { "decisionId": "_8A408366-D8E9-4626-ABF3-5F69AA01F880", "decisionName": "Should the driver be suspended?", "result": "No", "messages": [], "evaluationStatus": "SUCCEEDED" } ] } | [
"mvn clean install",
"<!-- Required for the DMN runtime API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"DMNRuntime dmnRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(DMNRuntime.class); String namespace = \"http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a\"; String modelName = \"dmn-movieticket-ageclassification\"; DMNModel dmnModel = dmnRuntime.getModel(namespace, modelName);",
"DMNContext dmnContext = dmnRuntime.newContext(); 1 for (Integer age : Arrays.asList(1,12,13,64,65,66)) { dmnContext.set(\"Age\", age); 2 DMNResult dmnResult = dmnRuntime.evaluateAll(dmnModel, dmnContext); 3 for (DMNDecisionResult dr : dmnResult.getDecisionResults()) { 4 log.info(\"Age: \" + age + \", \" + \"Decision: '\" + dr.getDecisionName() + \"', \" + \"Result: \" + dr.getResult()); } }",
"Age 1 Decision 'AgeClassification' : Child Age 12 Decision 'AgeClassification' : Child Age 13 Decision 'AgeClassification' : Adult Age 64 Decision 'AgeClassification' : Adult Age 65 Decision 'AgeClassification' : Senior Age 66 Decision 'AgeClassification' : Senior",
"-Dorg.kie.dmn.compiler.execmodel=true",
"mvn clean install",
"<!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServicesConfiguration conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); 1 conf.setMarshallingFormat(MarshallingFormat.JSON); 2 KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(conf);",
"DMNServicesClient dmnClient = kieServicesClient.getServicesClient(DMNServicesClient.class );",
"for (Integer age : Arrays.asList(1,12,13,64,65,66)) { DMNContext dmnContext = dmnClient.newContext(); 1 dmnContext.set(\"Age\", age); 2 ServiceResponse<DMNResult> serverResp = 3 dmnClient.evaluateAll(USDkieContainerId, USDmodelNamespace, USDmodelName, dmnContext); DMNResult dmnResult = serverResp.getResult(); 4 for (DMNDecisionResult dr : dmnResult.getDecisionResults()) { log.info(\"Age: \" + age + \", \" + \"Decision: '\" + dr.getDecisionName() + \"', \" + \"Result: \" + dr.getResult()); } }",
"@javax.xml.bind.annotation.XmlAccessorType(javax.xml.bind.annotation.XmlAccessType.FIELD) public class Person implements java.io.Serializable { static final long serialVersionUID = 1L; private java.lang.String id; private java.lang.String name; @javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter(org.kie.internal.jaxb.LocalDateXmlAdapter.class) private java.time.LocalDate dojoining; public Person() { } public java.lang.String getId() { return this.id; } public void setId(java.lang.String id) { this.id = id; } public java.lang.String getName() { return this.name; } public void setName(java.lang.String name) { this.name = name; } public java.time.LocalDate getDojoining() { return this.dojoining; } public void setDojoining(java.time.LocalDate dojoining) { this.dojoining = dojoining; } public Person(java.lang.String id, java.lang.String name, java.time.LocalDate dojoining) { this.id = id; this.name = name; this.dojoining = dojoining; } }",
"mvn clean install",
"curl -u username:password <request>",
"curl -H \"Authorization: bearer USDTOKEN\" <request>",
"curl -H \"accept: application/json\" -H \"content-type: application/json\"",
"curl -H \"accept: application/xml\" -H \"content-type: application/xml\"",
"curl -u krisv:krisv -H \"accept: application/xml\" -X GET \"http://localhost:8080/kie-server/services/rest/server/containers/MovieDMNContainer/dmn\"",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <response type=\"SUCCESS\" msg=\"OK models successfully retrieved from container 'MovieDMNContainer'\"> <dmn-model-info-list> <model> <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace> <model-name>dmn-movieticket-ageclassification</model-name> <model-id>_99</model-id> <decisions> <dmn-decision-info> <decision-id>_3</decision-id> <decision-name>AgeClassification</decision-name> </dmn-decision-info> </decisions> </model> </dmn-model-info-list> </response>",
"{ \"type\" : \"SUCCESS\", \"msg\" : \"OK models successfully retrieved from container 'MovieDMNContainer'\", \"result\" : { \"dmn-model-info-list\" : { \"models\" : [ { \"model-namespace\" : \"http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a\", \"model-name\" : \"dmn-movieticket-ageclassification\", \"model-id\" : \"_99\", \"decisions\" : [ { \"decision-id\" : \"_3\", \"decision-name\" : \"AgeClassification\" } ] } ] } } }",
"curl -u krisv:krisv -H \"accept: application/json\" -H \"content-type: application/json\" -X POST \"http://localhost:8080/kie-server/services/rest/server/containers/MovieDMNContainer/dmn\" -d \"{ \\\"model-namespace\\\" : \\\"http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a\\\", \\\"model-name\\\" : \\\"dmn-movieticket-ageclassification\\\", \\\"decision-name\\\" : [ ], \\\"decision-id\\\" : [ ], \\\"dmn-context\\\" : {\\\"Age\\\" : 66}}\"",
"{ \"model-namespace\" : \"http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a\", \"model-name\" : \"dmn-movieticket-ageclassification\", \"decision-name\" : [ ], \"decision-id\" : [ ], \"dmn-context\" : {\"Age\" : 66} }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <dmn-evaluation-context> <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace> <model-name>dmn-movieticket-ageclassification</model-name> <dmn-context xsi:type=\"jaxbListWrapper\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Age\"> <value xsi:type=\"xs:int\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">66</value> </element> </dmn-context> </dmn-evaluation-context>",
"{ \"type\" : \"SUCCESS\", \"msg\" : \"OK from container 'MovieDMNContainer'\", \"result\" : { \"dmn-evaluation-result\" : { \"messages\" : [ ], \"model-namespace\" : \"http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a\", \"model-name\" : \"dmn-movieticket-ageclassification\", \"decision-name\" : [ ], \"dmn-context\" : { \"Age\" : 66, \"AgeClassification\" : \"Senior\" }, \"decision-results\" : { \"_3\" : { \"messages\" : [ ], \"decision-id\" : \"_3\", \"decision-name\" : \"AgeClassification\", \"result\" : \"Senior\", \"status\" : \"SUCCEEDED\" } } } } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <response type=\"SUCCESS\" msg=\"OK from container 'MovieDMNContainer'\"> <dmn-evaluation-result> <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace> <model-name>dmn-movieticket-ageclassification</model-name> <dmn-context xsi:type=\"jaxbListWrapper\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Age\"> <value xsi:type=\"xs:int\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">66</value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"AgeClassification\"> <value xsi:type=\"xs:string\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">Senior</value> </element> </dmn-context> <messages/> <decisionResults> <entry> <key>_3</key> <value> <decision-id>_3</decision-id> <decision-name>AgeClassification</decision-name> <result xsi:type=\"xs:string\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">Senior</result> <messages/> <status>SUCCEEDED</status> </value> </entry> </decisionResults> </dmn-evaluation-result> </response>",
"curl -u wbadmin:wbadmin -X GET \"http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic%20Violation\" -H \"accept: application/xml\"",
"<?xml version='1.0' encoding='UTF-8'?> <dmn:definitions xmlns:dmn=\"http://www.omg.org/spec/DMN/20180521/MODEL/\" xmlns=\"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\" xmlns:di=\"http://www.omg.org/spec/DMN/20180521/DI/\" xmlns:kie=\"http://www.drools.org/kie/dmn/1.2\" xmlns:feel=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" xmlns:dmndi=\"http://www.omg.org/spec/DMN/20180521/DMNDI/\" xmlns:dc=\"http://www.omg.org/spec/DMN/20180521/DC/\" id=\"_1C792953-80DB-4B32-99EB-25FBE32BAF9E\" name=\"Traffic Violation\" expressionLanguage=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" typeLanguage=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" namespace=\"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\"> <dmn:extensionElements/> <dmn:itemDefinition id=\"_63824D3F-9173-446D-A940-6A7F0FA056BB\" name=\"tDriver\" isCollection=\"false\"> <dmn:itemComponent id=\"_9DAB5DAA-3B44-4F6D-87F2-95125FB2FEE4\" name=\"Name\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_856BA8FA-EF7B-4DF9-A1EE-E28263CE9955\" name=\"Age\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_FDC2CE03-D465-47C2-A311-98944E8CC23F\" name=\"State\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_D6FD34C4-00DC-4C79-B1BF-BBCF6FC9B6D7\" name=\"City\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_7110FE7E-1A38-4C39-B0EB-AEEF06BA37F4\" name=\"Points\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id=\"_40731093-0642-4588-9183-1660FC55053B\" name=\"tViolation\" isCollection=\"false\"> <dmn:itemComponent id=\"_39E88D9F-AE53-47AD-B3DE-8AB38D4F50B3\" name=\"Code\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_1648EA0A-2463-4B54-A12A-D743A3E3EE7B\" name=\"Date\" isCollection=\"false\"> <dmn:typeRef>date</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_9F129EAA-4E71-4D99-B6D0-84EEC3AC43CC\" name=\"Type\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> <dmn:allowedValues kie:constraintType=\"enumeration\" id=\"_626A8F9C-9DD1-44E0-9568-0F6F8F8BA228\"> <dmn:text>\"speed\", \"parking\", \"driving under the influence\"</dmn:text> </dmn:allowedValues> </dmn:itemComponent> <dmn:itemComponent id=\"_DDD10D6E-BD38-4C79-9E2F-8155E3A4B438\" name=\"Speed Limit\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_229F80E4-2892-494C-B70D-683ABF2345F6\" name=\"Actual Speed\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:itemDefinition id=\"_2D4F30EE-21A6-4A78-A524-A5C238D433AE\" name=\"tFine\" isCollection=\"false\"> <dmn:itemComponent id=\"_B9F70BC7-1995-4F51-B949-1AB65538B405\" name=\"Amount\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_F49085D6-8F08-4463-9A1A-EF6B57635DBD\" name=\"Points\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:inputData id=\"_1929CBD5-40E0-442D-B909-49CEDE0101DC\" name=\"Violation\"> <dmn:variable id=\"_C16CF9B1-5FAB-48A0-95E0-5FCD661E0406\" name=\"Violation\" typeRef=\"tViolation\"/> </dmn:inputData> <dmn:decision id=\"_4055D956-1C47-479C-B3F4-BAEB61F1C929\" name=\"Fine\"> <dmn:variable 
id=\"_8C1EAC83-F251-4D94-8A9E-B03ACF6849CD\" name=\"Fine\" typeRef=\"tFine\"/> <dmn:informationRequirement id=\"_800A3BBB-90A3-4D9D-BA5E-A311DED0134F\"> <dmn:requiredInput href=\"#_1929CBD5-40E0-442D-B909-49CEDE0101DC\"/> </dmn:informationRequirement> </dmn:decision> <dmn:inputData id=\"_1F9350D7-146D-46F1-85D8-15B5B68AF22A\" name=\"Driver\"> <dmn:variable id=\"_A80F16DF-0DB4-43A2-B041-32900B1A3F3D\" name=\"Driver\" typeRef=\"tDriver\"/> </dmn:inputData> <dmn:decision id=\"_8A408366-D8E9-4626-ABF3-5F69AA01F880\" name=\"Should the driver be suspended?\"> <dmn:question>Should the driver be suspended due to points on his license?</dmn:question> <dmn:allowedAnswers>\"Yes\", \"No\"</dmn:allowedAnswers> <dmn:variable id=\"_40387B66-5D00-48C8-BB90-E83EE3332C72\" name=\"Should the driver be suspended?\" typeRef=\"string\"/> <dmn:informationRequirement id=\"_982211B1-5246-49CD-BE85-3211F71253CF\"> <dmn:requiredInput href=\"#_1F9350D7-146D-46F1-85D8-15B5B68AF22A\"/> </dmn:informationRequirement> <dmn:informationRequirement id=\"_AEC4AA5F-50C3-4FED-A0C2-261F90290731\"> <dmn:requiredDecision href=\"#_4055D956-1C47-479C-B3F4-BAEB61F1C929\"/> </dmn:informationRequirement> </dmn:decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <di:extension/> <dmndi:DMNShape id=\"dmnshape-_1929CBD5-40E0-442D-B909-49CEDE0101DC\" dmnElementRef=\"_1929CBD5-40E0-442D-B909-49CEDE0101DC\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"708\" y=\"350\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_4055D956-1C47-479C-B3F4-BAEB61F1C929\" dmnElementRef=\"_4055D956-1C47-479C-B3F4-BAEB61F1C929\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"709\" y=\"210\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_1F9350D7-146D-46F1-85D8-15B5B68AF22A\" dmnElementRef=\"_1F9350D7-146D-46F1-85D8-15B5B68AF22A\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"369\" y=\"344\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_8A408366-D8E9-4626-ABF3-5F69AA01F880\" dmnElementRef=\"_8A408366-D8E9-4626-ABF3-5F69AA01F880\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"534\" y=\"83\" width=\"133\" height=\"63\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNEdge id=\"dmnedge-_800A3BBB-90A3-4D9D-BA5E-A311DED0134F\" dmnElementRef=\"_800A3BBB-90A3-4D9D-BA5E-A311DED0134F\"> <di:waypoint x=\"758\" y=\"375\"/> <di:waypoint x=\"759\" y=\"235\"/> </dmndi:DMNEdge> <dmndi:DMNEdge id=\"dmnedge-_982211B1-5246-49CD-BE85-3211F71253CF\" dmnElementRef=\"_982211B1-5246-49CD-BE85-3211F71253CF\"> <di:waypoint x=\"419\" y=\"369\"/> <di:waypoint x=\"600.5\" y=\"114.5\"/> </dmndi:DMNEdge> <dmndi:DMNEdge id=\"dmnedge-_AEC4AA5F-50C3-4FED-A0C2-261F90290731\" 
dmnElementRef=\"_AEC4AA5F-50C3-4FED-A0C2-261F90290731\"> <di:waypoint x=\"759\" y=\"235\"/> <di:waypoint x=\"600.5\" y=\"114.5\"/> </dmndi:DMNEdge> </dmndi:DMNDiagram> </dmndi:DMNDI>",
"curl -u wbadmin:wbadmin-X POST \"http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation\" -H \"accept: application/json\" -H \"Content-Type: application/json\" -d \"{\\\"Driver\\\":{\\\"Points\\\":15},\\\"Violation\\\":{\\\"Date\\\":\\\"2021-04-08\\\",\\\"Type\\\":\\\"speed\\\",\\\"Actual Speed\\\":135,\\\"Speed Limit\\\":100}}\"",
"{ \"Driver\": { \"Points\": 15 }, \"Violation\": { \"Date\": \"2021-04-08\", \"Type\": \"speed\", \"Actual Speed\": 135, \"Speed Limit\": 100 } }",
"{ \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 135, \"Code\": null, \"Date\": \"2021-04-08\" }, \"Driver\": { \"Points\": 15, \"State\": null, \"City\": null, \"Age\": null, \"Name\": null }, \"Fine\": { \"Points\": 7, \"Amount\": 1000 }, \"Should the driver be suspended?\": \"Yes\" }",
"{ \"Driver\": { \"Points\": 2 }, \"Violation\": { \"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100 } }",
"curl -X POST http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService -H 'content-type: application/json' -H 'accept: application/json' -d '{\"Driver\": {\"Points\": 2}, \"Violation\": {\"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100}}'",
"\"No\"",
"{ \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 120 }, \"Driver\": { \"Points\": 2 }, \"Fine\": { \"Points\": 3, \"Amount\": 500 }, \"Should the driver be suspended?\": \"No\" }",
"{ \"Driver\": { \"Points\": 2 }, \"Violation\": { \"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100 } }",
"curl -X POST http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/dmnresult -H 'content-type: application/json' -H 'accept: application/json' -d '{\"Driver\": {\"Points\": 2}, \"Violation\": {\"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100}}'",
"{ \"namespace\": \"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\", \"modelName\": \"Traffic Violation\", \"dmnContext\": { \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 120, \"Code\": null, \"Date\": null }, \"Driver\": { \"Points\": 2, \"State\": null, \"City\": null, \"Age\": null, \"Name\": null }, \"Fine\": { \"Points\": 3, \"Amount\": 500 }, \"Should the driver be suspended?\": \"No\" }, \"messages\": [], \"decisionResults\": [ { \"decisionId\": \"_4055D956-1C47-479C-B3F4-BAEB61F1C929\", \"decisionName\": \"Fine\", \"result\": { \"Points\": 3, \"Amount\": 500 }, \"messages\": [], \"evaluationStatus\": \"SUCCEEDED\" }, { \"decisionId\": \"_8A408366-D8E9-4626-ABF3-5F69AA01F880\", \"decisionName\": \"Should the driver be suspended?\", \"result\": \"No\", \"messages\": [], \"evaluationStatus\": \"SUCCEEDED\" } ] }",
"{ \"Driver\": { \"Points\": 2 }, \"Violation\": { \"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100 } }",
"curl -X POST http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/models/Traffic Violation/TrafficViolationDecisionService/dmnresult -H 'content-type: application/json' -H 'accept: application/json' -d '{\"Driver\": {\"Points\": 2}, \"Violation\": {\"Type\": \"speed\", \"Actual Speed\": 120, \"Speed Limit\": 100}}'",
"{ \"namespace\": \"https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF\", \"modelName\": \"Traffic Violation\", \"dmnContext\": { \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 120, \"Code\": null, \"Date\": null }, \"Driver\": { \"Points\": 2, \"State\": null, \"City\": null, \"Age\": null, \"Name\": null }, \"Should the driver be suspended?\": \"No\" }, \"messages\": [], \"decisionResults\": [ { \"decisionId\": \"_8A408366-D8E9-4626-ABF3-5F69AA01F880\", \"decisionName\": \"Should the driver be suspended?\", \"result\": \"No\", \"messages\": [], \"evaluationStatus\": \"SUCCEEDED\" } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/dmn-execution-con_dmn-models |
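The curl examples above can also be driven programmatically. The following is a minimal sketch, not part of the product documentation, that sends the same Traffic Violation payload to the KIE Server REST endpoint with the Python requests library; the server URL, container name, model name, and wbadmin credentials are taken from the curl examples above and should be replaced with the values for your environment.

import requests
from urllib.parse import quote

# Values taken from the curl examples above; replace them with your own
# KIE Server URL, container name, and credentials.
KIE_SERVER = "http://localhost:8080/kie-server/services/rest/server"
CONTAINER = "mykjar-project"
MODEL = "Traffic Violation"

payload = {
    "Driver": {"Points": 15},
    "Violation": {
        "Date": "2021-04-08",
        "Type": "speed",
        "Actual Speed": 135,
        "Speed Limit": 100,
    },
}

# Build the model endpoint; the model name contains a space, so percent-encode it.
url = f"{KIE_SERVER}/containers/{CONTAINER}/dmn/models/{quote(MODEL)}"

# POST the DMN context, as in the first curl example above.
response = requests.post(
    url,
    json=payload,
    auth=("wbadmin", "wbadmin"),
    headers={"accept": "application/json"},
)
response.raise_for_status()
print(response.json())

The same sketch works for the /dmnresult and decision-service endpoints shown above by changing only the URL suffix.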
Glossary | Glossary A access control The process of controlling what particular users are allowed to do. For example, access control to servers is typically based on an identity, established by a password or a certificate, and on rules regarding what that entity can do. See also access control list (ACL) . access control instructions (ACI) An access rule that specifies how subjects requesting access are to be identified or what rights are allowed or denied for a particular subject. See access control list (ACL) . access control list (ACL) A collection of access control entries that define a hierarchy of access rules to be evaluated when a server receives a request for access to a particular resource. See access control instructions (ACI) . administrator The person who installs and configures one or more Certificate System managers and sets up privileged users, or agents, for them. See also agent . Advanced Encryption Standard (AES) The Advanced Encryption Standard (AES), like its predecessor Data Encryption Standard (DES), is a FIPS-approved symmetric-key encryption standard. AES was adopted by the US government in 2002. It defines three block ciphers, AES-128, AES-192 and AES-256. The National Institute of Standards and Technology (NIST) defined the AES standard in U.S. FIPS PUB 197. For more information, see http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf . agent A user who belongs to a group authorized to manage agent services for a Certificate System manager. See also Certificate Manager agent , Key Recovery Authority agent . agent services 1. Services that can be administered by a Certificate System agent through HTML pages served by the Certificate System subsystem for which the agent has been assigned the necessary privileges. 2. The HTML pages for administering such services. agent-approved enrollment An enrollment that requires an agent to approve the request before the certificate is issued. APDU Application protocol data unit. A communication unit (analogous to a byte) that is used in communications between a smart card and a smart card reader. attribute value assertion (AVA) An assertion of the form attribute = value , where attribute is a tag, such as o (organization) or uid (user ID), and value is a value such as "Red Hat, Inc." or a login name. AVAs are used to form the distinguished name (DN) that identifies the subject of a certificate, called the subject name of the certificate. audit log A log that records various system events. This log can be signed, providing proof that it was not tampered with, and can only be read by an auditor user. auditor A privileged user who can view the signed audit logs. authentication Confident identification; assurance that a party to some computerized transaction is not an impostor. Authentication typically involves the use of a password, certificate, PIN, or other information to validate identity over a computer network. See also password-based authentication , certificate-based authentication , client authentication , server authentication . authentication module A set of rules (implemented as a Java TM class) for authenticating an end entity, agent, administrator, or any other entity that needs to interact with a Certificate System subsystem. In the case of typical end-user enrollment, after the user has supplied the information requested by the enrollment form, the enrollment servlet uses an authentication module associated with that form to validate the information and authenticate the user's identity. See servlet . 
authorization Permission to access a resource controlled by a server. Authorization typically takes place after the ACLs associated with a resource have been evaluated by a server. See access control list (ACL) . automated enrollment A way of configuring a Certificate System subsystem that allows automatic authentication for end-entity enrollment, without human intervention. With this form of authentication, a certificate request that completes authentication module processing successfully is automatically approved for profile processing and certificate issuance. B bind DN A user ID, in the form of a distinguished name (DN), used with a password to authenticate to Red Hat Directory Server. C CA certificate A certificate that identifies a certificate authority. See also certificate authority (CA) , subordinate CA , root CA . CA hierarchy A hierarchy of CAs in which a root CA delegates the authority to issue certificates to subordinate CAs. Subordinate CAs can also expand the hierarchy by delegating issuing status to other CAs. See also certificate authority (CA) , subordinate CA , root CA . CA server key The SSL server key of the server providing a CA service. CA signing key The private key that corresponds to the public key in the CA certificate. A CA uses its signing key to sign certificates and CRLs. certificate Digital data, formatted according to the X.509 standard, that specifies the name of an individual, company, or other entity (the subject name of the certificate) and certifies that a public key , which is also included in the certificate, belongs to that entity. A certificate is issued and digitally signed by a certificate authority (CA) . A certificate's validity can be verified by checking the CA's digital signature through public-key cryptography techniques. To be trusted within a public-key infrastructure (PKI) , a certificate must be issued and signed by a CA that is trusted by other entities enrolled in the PKI. certificate authority (CA) A trusted entity that issues a certificate after verifying the identity of the person or entity the certificate is intended to identify. A CA also renews and revokes certificates and generates CRLs. The entity named in the issuer field of a certificate is always a CA. Certificate authorities can be independent third parties or a person or organization using certificate-issuing server software, such as Red Hat Certificate System. certificate chain A hierarchical series of certificates signed by successive certificate authorities. A CA certificate identifies a certificate authority (CA) and is used to sign certificates issued by that authority. A CA certificate can in turn be signed by the CA certificate of a parent CA, and so on up to a root CA . Certificate System allows any end entity to retrieve all the certificates in a certificate chain. certificate extensions An X.509 v3 certificate contains an extensions field that permits any number of additional fields to be added to the certificate. Certificate extensions provide a way of adding information such as alternative subject names and usage restrictions to certificates. A number of standard extensions have been defined by the PKIX working group. certificate fingerprint A one-way hash associated with a certificate. The number is not part of the certificate itself, but is produced by applying a hash function to the contents of the certificate. If the contents of the certificate changes, even by a single character, the same function produces a different number. 
Certificate fingerprints can therefore be used to verify that certificates have not been tampered with. Certificate Management Message Formats (CMMF) Message formats used to convey certificate requests and revocation requests from end entities to a Certificate Manager and to send a variety of information to end entities. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. CMMF is subsumed by another proposed standard, Certificate Management Messages over Cryptographic Message Syntax (CMC) . For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmmf-02 . Certificate Management Messages over Cryptographic Message Syntax (CMC) Message format used to convey a request for a certificate to a Certificate Manager. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmc-02 . Certificate Manager An independent Certificate System subsystem that acts as a certificate authority. A Certificate Manager instance issues, renews, and revokes certificates, which it can publish along with CRLs to an LDAP directory. It accepts requests from end entities. See certificate authority (CA) . Certificate Manager agent A user who belongs to a group authorized to manage agent services for a Certificate Manager. These services include the ability to access and modify (approve and reject) certificate requests and issue certificates. certificate profile A set of configuration settings that defines a certain type of enrollment. The certificate profile sets policies for a particular type of enrollment along with an authentication method in a certificate profile. Certificate Request Message Format (CRMF) Format used for messages related to management of X.509 certificates. This format is a subset of CMMF. See also Certificate Management Message Formats (CMMF) . For detailed information, see https://tools.ietf.org/html/rfc2511 . certificate revocation list (CRL) As defined by the X.509 standard, a list of revoked certificates by serial number, generated and signed by a certificate authority (CA) . Certificate System See Red Hat Certificate System , Cryptographic Message Syntax (CS) . Certificate System console A console that can be opened for any single Certificate System instance. A Certificate System console allows the Certificate System administrator to control configuration settings for the corresponding Certificate System instance. Certificate System subsystem One of the five Certificate System managers: Certificate Manager , Online Certificate Status Manager, Key Recovery Authority , Token Key Service, or Token Processing System. certificate-based authentication Authentication based on certificates and public-key cryptography. See also password-based authentication . chain of trust See certificate chain . chained CA See linked CA . cipher See cryptographic algorithm . client authentication The process of identifying a client to a server, such as with a name and password or with a certificate and some digitally signed data. See certificate-based authentication , password-based authentication , server authentication . client SSL certificate A certificate used to identify a client to a server using the SSL protocol. See Secure Sockets Layer (SSL) . CMC See Certificate Management Messages over Cryptographic Message Syntax (CMC) . 
CMC Enrollment Features that allow either signed enrollment or signed revocation requests to be sent to a Certificate Manager using an agent's signing certificate. These requests are then automatically processed by the Certificate Manager. CMMF See Certificate Management Message Formats (CMMF) . CRL See certificate revocation list (CRL) . CRMF See Certificate Request Message Format (CRMF) . cross-certification The exchange of certificates by two CAs in different certification hierarchies, or chains. Cross-certification extends the chain of trust so that it encompasses both hierarchies. See also certificate authority (CA) . cross-pair certificate A certificate issued by one CA to another CA which is then stored by both CAs to form a circle of trust. The two CAs issue certificates to each other, and then store both cross-pair certificates as a certificate pair. cryptographic algorithm A set of rules or directions used to perform cryptographic operations such as encryption and decryption . Cryptographic Message Syntax (CS) The syntax used to digitally sign, digest, authenticate, or encrypt arbitrary messages, such as CMMF. cryptographic module See PKCS #11 module . cryptographic service provider (CSP) A cryptographic module that performs cryptographic services, such as key generation, key storage, and encryption, on behalf of software that uses a standard interface such as that defined by PKCS #11 to request such services. CSP See cryptographic service provider (CSP) . D decryption Unscrambling data that has been encrypted. See encryption . delta CRL A CRL containing a list of those certificates that have been revoked since the last full CRL was issued. digital ID See certificate . digital signature To create a digital signature, the signing software first creates a one-way hash from the data to be signed, such as a newly issued certificate. The one-way hash is then encrypted with the private key of the signer. The resulting digital signature is unique for each piece of data signed. Even a single comma added to a message changes the digital signature for that message. Successful decryption of the digital signature with the signer's public key and comparison with another hash of the same data provides tamper detection . Verification of the certificate chain for the certificate containing the public key provides authentication of the signer. See also nonrepudiation , encryption . distinguished name (DN) A series of AVAs that identify the subject of a certificate. See attribute value assertion (AVA) . distribution points Used for CRLs to define a set of certificates. Each distribution point is defined by a set of certificates that are issued. A CRL can be created for a particular distribution point. dual key pair Two public-private key pairs, four keys altogether, corresponding to two separate certificates. The private key of one pair is used for signing operations, and the public and private keys of the other pair are used for encryption and decryption operations. Each pair corresponds to a separate certificate . See also encryption key , public-key cryptography , signing key . Key Recovery Authority An optional, independent Certificate System subsystem that manages the long-term archival and recovery of RSA encryption keys for end entities. A Certificate Manager can be configured to archive end entities' encryption keys with a Key Recovery Authority before issuing new certificates. 
The Key Recovery Authority is useful only if end entities are encrypting data, such as sensitive email, that the organization may need to recover someday. It can be used only with end entities that support dual key pairs: two separate key pairs, one for encryption and one for digital signatures. Key Recovery Authority agent A user who belongs to a group authorized to manage agent services for a Key Recovery Authority, including managing the request queue and authorizing recovery operation using HTML-based administration pages. Key Recovery Authority recovery agent One of the m of n people who own portions of the storage key for the Key Recovery Authority . Key Recovery Authority storage key Special key used by the Key Recovery Authority to encrypt the end entity's encryption key after it has been decrypted with the Key Recovery Authority's private transport key. The storage key never leaves the Key Recovery Authority. Key Recovery Authority transport certificate Certifies the public key used by an end entity to encrypt the entity's encryption key for transport to the Key Recovery Authority. The Key Recovery Authority uses the private key corresponding to the certified public key to decrypt the end entity's key before encrypting it with the storage key. E eavesdropping Surreptitious interception of information sent over a network by an entity for which the information is not intended. Elliptic Curve Cryptography (ECC) A cryptographic algorithm which uses elliptic curves to create additive logarithms for the mathematical problems which are the basis of the cryptographic keys. ECC ciphers are more efficient to use than RSA ciphers and, because of their intrinsic complexity, are stronger at smaller bits than RSA ciphers. encryption Scrambling information in a way that disguises its meaning. See decryption . encryption key A private key used for encryption only. An encryption key and its equivalent public key, plus a signing key and its equivalent public key, constitute a dual key pair . end entity In a public-key infrastructure (PKI) , a person, router, server, or other entity that uses a certificate to identify itself. enrollment The process of requesting and receiving an X.509 certificate for use in a public-key infrastructure (PKI) . Also known as registration . extensions field See certificate extensions . F Federal Bridge Certificate Authority (FBCA) A configuration where two CAs form a circle of trust by issuing cross-pair certificates to each other and storing the two cross-pair certificates as a single certificate pair. fingerprint See certificate fingerprint . FIPS PUBS 140 Federal Information Standards Publications (FIPS PUBS) 140 is a US government standard for implementations of cryptographic modules, hardware or software that encrypts and decrypts data or performs other cryptographic operations, such as creating or verifying digital signatures. Many products sold to the US government must comply with one or more of the FIPS standards. See http://www.nist.gov . firewall A system or combination of systems that enforces a boundary between two or more networks. I impersonation The act of posing as the intended recipient of information sent over a network. Impersonation can take two forms: spoofing and misrepresentation . input In the context of the certificate profile feature, it defines the enrollment form for a particular certificate profile. Each input is set, which then dynamically creates the enrollment form from all inputs configured for this enrollment. 
intermediate CA A CA whose certificate is located between the root CA and the issued certificate in a certificate chain . IP spoofing The forgery of client IP addresses. J JAR file A digital envelope for a compressed collection of files organized according to the Java TM archive (JAR) format . Java TM archive (JAR) format A set of conventions for associating digital signatures, installer scripts, and other information with files in a directory. Java TM Cryptography Architecture (JCA) The API specification and reference developed by Sun Microsystems for cryptographic services. See http://java.sun.com/products/jdk/1.2/docs/guide/security/CryptoSpec.Introduction . Java TM Development Kit (JDK) Software development kit provided by Sun Microsystems for developing applications and applets using the Java TM programming language. Java TM Native Interface (JNI) A standard programming interface that provides binary compatibility across different implementations of the Java TM Virtual Machine (JVM) on a given platform, allowing existing code written in a language such as C or C++ for a single platform to bind to Java TM. See http://java.sun.com/products/jdk/1.2/docs/guide/jni/index.html . Java TM Security Services (JSS) A Java TM interface for controlling security operations performed by Network Security Services (NSS). K KEA See Key Exchange Algorithm (KEA) . key A large number used by a cryptographic algorithm to encrypt or decrypt data. A person's public key , for example, allows other people to encrypt messages intended for that person. The messages must then be decrypted by using the corresponding private key . key exchange A procedure followed by a client and server to determine the symmetric keys they will both use during an SSL session. Key Exchange Algorithm (KEA) An algorithm used for key exchange by the US Government. L Lightweight Directory Access Protocol (LDAP) A directory service protocol designed to run over TCP/IP and across multiple platforms. LDAP is a simplified version of Directory Access Protocol (DAP), used to access X.500 directories. LDAP is under IETF change control and has evolved to meet Internet requirements. linked CA An internally deployed certificate authority (CA) whose certificate is signed by a public, third-party CA. The internal CA acts as the root CA for certificates it issues, and the third- party CA acts as the root CA for certificates issued by other CAs that are linked to the same third-party root CA. Also known as "chained CA" and by other terms used by different public CAs. M manual authentication A way of configuring a Certificate System subsystem that requires human approval of each certificate request. With this form of authentication, a servlet forwards a certificate request to a request queue after successful authentication module processing. An agent with appropriate privileges must then approve each request individually before profile processing and certificate issuance can proceed. MD5 A message digest algorithm that was developed by Ronald Rivest. See also one-way hash . message digest See one-way hash . misrepresentation The presentation of an entity as a person or organization that it is not. For example, a website might pretend to be a furniture store when it is really a site that takes credit-card payments but never sends any goods. Misrepresentation is one form of impersonation . See also spoofing . 
N Network Security Services (NSS) A set of libraries designed to support cross-platform development of security-enabled communications applications. Applications built using the NSS libraries support the Secure Sockets Layer (SSL) protocol for authentication, tamper detection, and encryption, and the PKCS #11 protocol for cryptographic token interfaces. NSS is also available separately as a software development kit. non-TMS Non-token management system. Refers to a configuration of subsystems (the CA and, optionally, KRA and OCSP) which do not handle smart cards directly. See Also token management system (TMS) . nonrepudiation The inability by the sender of a message to deny having sent the message. A digital signature provides one form of nonrepudiation. O object signing A method of file signing that allows software developers to sign Java code, JavaScript scripts, or any kind of file and allows users to identify the signers and control access by signed code to local system resources. object-signing certificate A certificate whose associated private key is used to sign objects; related to object signing . OCSP Online Certificate Status Protocol. one-way hash 1. A number of fixed-length generated from data of arbitrary length with the aid of a hashing algorithm. The number, also called a message digest, is unique to the hashed data. Any change in the data, even deleting or altering a single character, results in a different value. 2. The content of the hashed data cannot be deduced from the hash. operation The specific operation, such as read or write, that is being allowed or denied in an access control instruction. output In the context of the certificate profile feature, it defines the resulting form from a successful certificate enrollment for a particular certificate profile. Each output is set, which then dynamically creates the form from all outputs configured for this enrollment. P password-based authentication Confident identification by means of a name and password. See also authentication , certificate-based authentication . PKCS #10 The public-key cryptography standard that governs certificate requests. PKCS #11 The public-key cryptography standard that governs cryptographic tokens such as smart cards. PKCS #11 module A driver for a cryptographic device that provides cryptographic services, such as encryption and decryption, through the PKCS #11 interface. A PKCS #11 module, also called a cryptographic module or cryptographic service provider , can be implemented in either hardware or software. A PKCS #11 module always has one or more slots, which may be implemented as physical hardware slots in some form of physical reader, such as for smart cards, or as conceptual slots in software. Each slot for a PKCS #11 module can in turn contain a token, which is the hardware or software device that actually provides cryptographic services and optionally stores certificates and keys. Red Hat provides a built-in PKCS #11 module with Certificate System. PKCS #12 The public-key cryptography standard that governs key portability. PKCS #7 The public-key cryptography standard that governs signing and encryption. private key One of a pair of keys used in public-key cryptography. The private key is kept secret and is used to decrypt data encrypted with the corresponding public key . 
proof-of-archival (POA) Data signed with the private Key Recovery Authority transport key that contains information about an archived end-entity key, including key serial number, name of the Key Recovery Authority, subject name of the corresponding certificate, and date of archival. The signed proof-of-archival data are the response returned by the Key Recovery Authority to the Certificate Manager after a successful key archival operation. See also Key Recovery Authority transport certificate . public key One of a pair of keys used in public-key cryptography. The public key is distributed freely and published as part of a certificate . It is typically used to encrypt data sent to the public key's owner, who then decrypts the data with the corresponding private key . public-key cryptography A set of well-established techniques and standards that allow an entity to verify its identity electronically or to sign and encrypt electronic data. Two keys are involved, a public key and a private key. A public key is published as part of a certificate, which associates that key with a particular identity. The corresponding private key is kept secret. Data encrypted with the public key can be decrypted only with the private key. public-key infrastructure (PKI) The standards and services that facilitate the use of public-key cryptography and X.509 v3 certificates in a networked environment. R RC2, RC4 Cryptographic algorithms developed for RSA Data Security by Rivest. See also cryptographic algorithm . Red Hat Certificate System A highly configurable set of software components and tools for creating, deploying, and managing certificates. Certificate System is comprised of five major subsystems that can be installed in different Certificate System instances in different physical locations: Certificate Manager , Online Certificate Status Manager, Key Recovery Authority , Token Key Service, and Token Processing System. registration See enrollment . root CA The certificate authority (CA) with a self-signed certificate at the top of a certificate chain. See also CA certificate , subordinate CA . RSA algorithm Short for Rivest-Shamir-Adleman, a public-key algorithm for both encryption and authentication. It was developed by Ronald Rivest, Adi Shamir, and Leonard Adleman and introduced in 1978. RSA key exchange A key-exchange algorithm for SSL based on the RSA algorithm. S sandbox A Java TM term for the carefully defined limits within which Java TM code must operate. secure channel A security association between the TPS and the smart card which allows encrypted communication based on a shared master key generated by the TKS and the smart card APDUs. Secure Sockets Layer (SSL) A protocol that allows mutual authentication between a client and server and the establishment of an authenticated and encrypted connection. SSL runs above TCP/IP and below HTTP, LDAP, IMAP, NNTP, and other high-level network protocols. security domain A centralized repository or inventory of PKI subsystems. Its primary purpose is to facilitate the installation and configuration of new PKI services by automatically establishing trusted relationships between subsystems. self tests A feature that tests a Certificate System instance both when the instance starts up and on-demand. server authentication The process of identifying a server to a client. See also client authentication . server SSL certificate A certificate used to identify a server to a client using the Secure Sockets Layer (SSL) protocol. 
servlet Java TM code that handles a particular kind of interaction with end entities on behalf of a Certificate System subsystem. For example, certificate enrollment, revocation, and key recovery requests are each handled by separate servlets. SHA Secure Hash Algorithm, a hash function used by the US government. signature algorithm A cryptographic algorithm used to create digital signatures. Certificate System supports the MD5 and SHA signing algorithms. See also cryptographic algorithm , digital signature . signed audit log See audit log . signing certificate A certificate whose public key corresponds to a private key used to create digital signatures. For example, a Certificate Manager must have a signing certificate whose public key corresponds to the private key it uses to sign the certificates it issues. signing key A private key used for signing only. A signing key and its equivalent public key, plus an encryption key and its equivalent public key, constitute a dual key pair . single sign-on 1. In Certificate System, a password that simplifies the way to sign on to Red Hat Certificate System by storing the passwords for the internal database and tokens. Each time a user logs on, he is required to enter this single password. 2. The ability for a user to log in once to a single computer and be authenticated automatically by a variety of servers within a network. Partial single sign-on solutions can take many forms, including mechanisms for automatically tracking passwords used with different servers. Certificates support single sign-on within a public-key infrastructure (PKI) . A user can log in once to a local client's private-key database and, as long as the client software is running, rely on certificate-based authentication to access each server within an organization that the user is allowed to access. slot The portion of a PKCS #11 module , implemented in either hardware or software, that contains a token . smart card A small device that contains a microprocessor and stores cryptographic information, such as keys and certificates, and performs cryptographic operations. Smart cards implement some or all of the PKCS #11 interface. spoofing Pretending to be someone else. For example, a person can pretend to have the email address [email protected] , or a computer can identify itself as a site called www.redhat.com when it is not. Spoofing is one form of impersonation . See also misrepresentation . SSL See Secure Sockets Layer (SSL) . subject The entity identified by a certificate . In particular, the subject field of a certificate contains a subject name that uniquely describes the certified entity. subject name A distinguished name (DN) that uniquely describes the subject of a certificate . subordinate CA A certificate authority whose certificate is signed by another subordinate CA or by the root CA. See CA certificate , root CA . symmetric encryption An encryption method that uses the same cryptographic key to encrypt and decrypt a given message. T tamper detection A mechanism ensuring that data received in electronic form entirely corresponds with the original version of the same data. token A hardware or software device that is associated with a slot in a PKCS #11 module . It provides cryptographic services and optionally stores certificates and keys. token key service (TKS) A subsystem in the token management system which derives specific, separate keys for every smart card based on the smart card APDUs and other shared information, like the token CUID. 
token management system (TMS) The interrelated subsystems - CA, TKS, TPS, and, optionally, the KRA - which are used to manage certificates on smart cards (tokens). token processing system (TPS) A subsystem which interacts directly with the Enterprise Security Client and smart cards to manage the keys and certificates on those smart cards. tree hierarchy The hierarchical structure of an LDAP directory. trust Confident reliance on a person or other entity. In a public-key infrastructure (PKI) , trust refers to the relationship between the user of a certificate and the certificate authority (CA) that issued the certificate. If a CA is trusted, then valid certificates issued by that CA can be trusted. V virtual private network (VPN) A way of connecting geographically distant divisions of an enterprise. The VPN allows the divisions to communicate over an encrypted channel, allowing authenticated, confidential transactions that would normally be restricted to a private network. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Glossary
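Several of the glossary entries above (certificate fingerprint, one-way hash, digital signature, tamper detection) rely on the property that hashing the same data always produces the same digest, while changing even a single character produces a different one. The following is a small, hypothetical illustration of that property using Python's hashlib; it is not how Certificate System computes fingerprints internally, and the byte strings are placeholders.

import hashlib

# Pretend these bytes are the DER encoding of a certificate.
cert_bytes = b"example certificate contents"
tampered = b"example certificate content!"  # a single character changed

# A one-way hash (SHA-256 here) of the data acts as its fingerprint.
fingerprint = hashlib.sha256(cert_bytes).hexdigest()
tampered_fingerprint = hashlib.sha256(tampered).hexdigest()

print(fingerprint)
print(tampered_fingerprint)
# The two digests differ, so tampering with the certificate contents is
# detectable by comparing fingerprints of the received and original data.
assert fingerprint != tampered_fingerprint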
Chapter 5. Configuring Visual Studio Code - Open Source ("Code - OSS") | Chapter 5. Configuring Visual Studio Code - Open Source ("Code - OSS") Learn how to configure Visual Studio Code - Open Source ("Code - OSS"). Section 5.1, "Configuring single and multiroot workspaces" 5.1. Configuring single and multiroot workspaces With the multi-root workspace feature, you can work with multiple project folders in the same workspace. This is useful when you are working on several related projects at once, such as product documentation and product code repositories. Tip See What is a VS Code "workspace" for better understanding and authoring the workspace files. Note The workspace is set to open in multi-root mode by default. Once the workspace is started, the /projects/.code-workspace workspace file is generated. The workspace file will contain all the projects described in the devfile. { "folders": [ { "name": "project-1", "path": "/projects/project-1" }, { "name": "project-2", "path": "/projects/project-2" } ] } If the workspace file already exists, it will be updated and all missing projects will be taken from the devfile. If you remove a project from the devfile, it will be left in the workspace file. You can change the default behavior and provide your own workspace file or switch to a single-root workspace. Procedure Provide your own workspace file. Put a workspace file with the name .code-workspace into the root of your repository. After workspace creation, Visual Studio Code - Open Source ("Code - OSS") will use the workspace file as it is. { "folders": [ { "name": "project-name", "path": "." } ] } Important Be careful when creating a workspace file. In case of errors, an empty Visual Studio Code - Open Source ("Code - OSS") will be opened instead. Important If you have several projects, the workspace file will be taken from the first project. If the workspace file does not exist in the first project, a new one will be created and placed in the /projects directory. Specify an alternative workspace file. Define the VSCODE_DEFAULT_WORKSPACE environment variable in your devfile and specify the location of the workspace file. env: - name: VSCODE_DEFAULT_WORKSPACE value: "/projects/project-name/workspace-file" Open a workspace in single-root mode. Define the VSCODE_DEFAULT_WORKSPACE environment variable and set it to the root. env: - name: VSCODE_DEFAULT_WORKSPACE value: "/" | [
"{ \"folders\": [ { \"name\": \"project-1\", \"path\": \"/projects/project-1\" }, { \"name\": \"project-2\", \"path\": \"/projects/project-2\" } ] }",
"{ \"folders\": [ { \"name\": \"project-name\", \"path\": \".\" } ] }",
"env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/projects/project-name/workspace-file\"",
"env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/\""
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/administration_guide/configuring-visual-studio-code |
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/rhdg-downloads_datagrid |
8.3.2. Nessus | 8.3.2. Nessus Nessus is a full-service security scanner. The plug-in architecture of Nessus allows users to customize it for their systems and networks. As with any scanner, Nessus is only as good as the signature database it relies upon. Fortunately, Nessus is frequently updated and features full reporting, host scanning, and real-time vulnerability searches. Remember that there could be false positives and false negatives, even in a tool as powerful and as frequently updated as Nessus. Note Nessus is not included with Red Hat Enterprise Linux and is not supported. It has been included in this document as a reference to users who may be interested in using this popular application. For more information about Nessus, refer to the official website at the following URL: http://www.nessus.org/ | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-vuln-tools-nessus |
Chapter 1. Introduction to the Ceph File System | Chapter 1. Introduction to the Ceph File System As a storage administrator, you can gain an understanding of the features, system components, and limitations to manage a Ceph File System (CephFS) environment. 1.1. Ceph File System features and enhancements The Ceph File System (CephFS) is a file system compatible with POSIX standards that is built on top of Ceph's distributed object store, called RADOS (Reliable Autonomic Distributed Object Storage). CephFS provides file access to a Red Hat Ceph Storage cluster, and uses the POSIX semantics wherever possible. For example, in contrast to many other common network file systems like NFS, CephFS maintains strong cache coherency across clients. The goal is for processes using the file system to behave the same when they are on different hosts as when they are on the same host. However, in some cases, CephFS diverges from the strict POSIX semantics. The Ceph File System has the following features and enhancements: Scalability The Ceph File System is highly scalable due to horizontal scaling of metadata servers and direct client reads and writes with individual OSD nodes. Shared File System The Ceph File System is a shared file system so multiple clients can work on the same file system at once. Multiple File Systems You can have multiple file systems active on one storage cluster. Each CephFS has its own set of pools and its own set of Metadata Server (MDS) ranks. When deploying multiple file systems this requires more running MDS daemons. This can increase metadata throughput, but also increases operational costs. You can also limit client access to certain file systems. High Availability The Ceph File System provides a cluster of Ceph Metadata Servers (MDS). One is active and others are in standby mode. If the active MDS terminates unexpectedly, one of the standby MDS becomes active. As a result, client mounts continue working through a server failure. This behavior makes the Ceph File System highly available. In addition, you can configure multiple active metadata servers. Configurable File and Directory Layouts The Ceph File System allows users to configure file and directory layouts to use multiple pools, pool namespaces, and file striping modes across objects. POSIX Access Control Lists (ACL) The Ceph File System supports the POSIX Access Control Lists (ACL). ACLs are enabled by default with the Ceph File Systems mounted as kernel clients with kernel version kernel-3.10.0-327.18.2.el7 or newer. To use an ACL with the Ceph File Systems mounted as FUSE clients, you must enable them. Client Quotas The Ceph File System supports setting quotas on any directory in a system. The quota can restrict the number of bytes or the number of files stored beneath that point in the directory hierarchy. CephFS client quotas are enabled by default. Important CephFS EC pools are for archival purpose only. Additional Resources See the Management of MDS service using the Ceph Orchestrator section in the Operations Guide to install Ceph Metadata servers. See the Deployment of the Ceph File System section in the File System Guide to create Ceph File Systems. 1.2. Ceph File System components The Ceph File System has two primary components: Clients The CephFS clients perform I/O operations on behalf of applications using CephFS, such as ceph-fuse for FUSE clients and kcephfs for kernel clients. CephFS clients send metadata requests to an active Metadata Server. 
In return, the CephFS client learns of the file metadata, and can begin safely caching both metadata and file data. Metadata Servers (MDS) The MDS does the following: Provides metadata to CephFS clients. Manages metadata related to files stored on the Ceph File System. Coordinates access to the shared Red Hat Ceph Storage cluster. Caches hot metadata to reduce requests to the backing metadata pool store. Manages the CephFS clients' caches to maintain cache coherence. Replicates hot metadata between active MDS. Coalesces metadata mutations to a compact journal with regular flushes to the backing metadata pool. CephFS requires at least one Metadata Server daemon ( ceph-mds ) to run. The diagram below shows the component layers of the Ceph File System. The bottom layer represents the underlying core storage cluster components: Ceph OSDs ( ceph-osd ) where the Ceph File System data and metadata are stored. Ceph Metadata Servers ( ceph-mds ) that manages Ceph File System metadata. Ceph Monitors ( ceph-mon ) that manages the master copy of the cluster map. The Ceph Storage protocol layer represents the Ceph native librados library for interacting with the core storage cluster. The CephFS library layer includes the CephFS libcephfs library that works on top of librados and represents the Ceph File System. The top layer represents two types of Ceph clients that can access the Ceph File Systems. The diagram below shows more details on how the Ceph File System components interact with each other. Additional Resources See the Management of MDS service using the Ceph Orchestrator section in the File System Guide to install Ceph Metadata servers. See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide to create Ceph File Systems. 1.3. Ceph File System and SELinux Starting with Red Hat Enterprise Linux 8.3 and Red Hat Ceph Storage 4.2, support for using Security-Enhanced Linux (SELinux) on Ceph File Systems (CephFS) environments is available. You can now set any SELinux file type with CephFS, along with assigning a particular SELinux type on individual files. This support applies to the Ceph File System Metadata Server (MDS), the CephFS File System in User Space (FUSE) clients, and the CephFS kernel clients. Additional Resources See the Using SELinux on Red Hat Enterprise Linux 8 for more information about SELinux. 1.4. Ceph File System limitations and the POSIX standards The Ceph File System diverges from the strict POSIX semantics in the following ways: If a client's attempt to write a file fails, the write operations are not necessarily atomic. That is, the client might call the write() system call on a file opened with the O_SYNC flag with an 8MB buffer and then terminates unexpectedly and the write operation can be only partially applied. Almost all file systems, even local file systems, have this behavior. In situations when the write operations occur simultaneously, a write operation that exceeds object boundaries is not necessarily atomic. For example, writer A writes "aa|aa" and writer B writes "bb|bb" simultaneously, where "|" is the object boundary, and "aa|bb" is written rather than the proper "aa|aa" or "bb|bb" . POSIX includes the telldir() and seekdir() system calls that allow you to obtain the current directory offset and seek back to it. Because CephFS can fragment directories at any time, it is difficult to return a stable integer offset for a directory. 
As such, calling the seekdir() system call to a non-zero offset might often work but is not guaranteed to do so. Calling seekdir() to offset 0 will always work. This is equivalent to the rewinddir() system call. Sparse files propagate incorrectly to the st_blocks field of the stat() system call. CephFS does not explicitly track parts of a file that are allocated or written to, because the st_blocks field is always populated by the quotient of file size divided by block size. This behavior causes utilities, such as du , to overestimate used space. When the mmap() system call maps a file into memory on multiple hosts, write operations are not coherently propagated to caches of other hosts. That is, if a page is cached on host A, and then updated on host B, host A page is not coherently invalidated. CephFS clients present a hidden .snap directory that is used to access, create, delete, and rename snapshots. Although this directory is excluded from the readdir() system call, any process that tries to create a file or directory with the same name returns an error. The name of this hidden directory can be changed at mount time with the -o snapdirname=.<new_name> option or by using the client_snapdir configuration option. Additional Resources See the Management of MDS service using the Ceph Orchestrator section in the File System Guide to install Ceph Metadata servers. See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide to create Ceph File Systems. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/introduction-to-the-ceph-file-system |
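The st_blocks behavior described in the limitations section can be observed directly from a client. The following is a minimal sketch, assuming a hypothetical file path under a CephFS mount, that compares the logical size from st_size with the usage a du-style tool would derive from st_blocks (counted in 512-byte units); on CephFS the latter tracks the file size rather than the blocks actually written, so sparse files appear to consume their full size.

import os

# Hypothetical path under a CephFS mount; replace it with a real file.
path = "/mnt/cephfs/sparse-file"

st = os.stat(path)

logical_size = st.st_size            # bytes the file claims to contain
reported_usage = st.st_blocks * 512  # what du-style tools add up

print("st_size:  ", logical_size, "bytes")
print("st_blocks:", st.st_blocks, "->", reported_usage, "bytes")
# On CephFS, reported_usage follows the file size rather than the blocks
# actually written, which is why du overestimates space used by sparse files.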
Chapter 4. Example Script | Chapter 4. Example Script | [
"#!/usr/bin/env python from __future__ import print_function import sys import requests from datetime import datetime, timedelta API_HOST = 'https://access.redhat.com/product-life-cycles/api/v1' def get_data(query): full_query = API_HOST + query r = requests.get(full_query) if r.status_code != 200: print('ERROR: Invalid request; returned {} for the following ' 'query:\\n{}'.format(r.status_code, full_query)) sys.exit(1) if not r.json(): print('No data returned with the following query:') print(full_query) sys.exit(0) return r.json() Get RHEL and Openshift Container Platform 4 life cycle data endpoint = '/products' params = 'name=Red Hat Enterprise Linux,Openshift Container Platform 4' data = get_data(endpoint + '?' + params) products = data['data'] for product in products: print(product) Get RHEL and Openshift Container Platform 4 life cycle data using legacy JSON endpoint endpoint = '/plccapi/lifecycle.json' params = 'products=Red Hat Enterprise Linux,Openshift Container Platform 4' data = get_data(endpoint + '?' + params) for product in data: print(product) print('-----')"
] | https://docs.redhat.com/en/documentation/red_hat_product_life_cycle_data_api/1.0/html/red_hat_product_life_cycle_data_api/example_script |
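The get_data() helper in the script above exits on a non-200 status, but it has no request timeout and will raise an unhandled exception on network errors. The following is a hedged variation of the same helper that adds both; it reuses the API_HOST constant from the script and is a sketch, not part of the published example.

import sys
import requests

API_HOST = 'https://access.redhat.com/product-life-cycles/api/v1'

def get_data(query, timeout=10):
    full_query = API_HOST + query
    try:
        # Bound the request so a non-responsive API cannot hang the script.
        r = requests.get(full_query, timeout=timeout)
    except requests.exceptions.RequestException as exc:
        print('ERROR: request failed for {}:\n{}'.format(full_query, exc))
        sys.exit(1)
    if r.status_code != 200:
        print('ERROR: Invalid request; returned {} for the following '
              'query:\n{}'.format(r.status_code, full_query))
        sys.exit(1)
    if not r.json():
        print('No data returned with the following query:')
        print(full_query)
        sys.exit(0)
    return r.json()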
Chapter 62. Salesforce Delete Sink | Chapter 62. Salesforce Delete Sink Removes an object from Salesforce. The body received must be a JSON containing two keys: sObjectId and sObjectName. Example body: { "sObjectId": "XXXXX0", "sObjectName": "Contact" } 62.1. Configuration Options The following table summarizes the configuration options available for the salesforce-delete-sink Kamelet: Property Name Description Type Default Example clientId * Consumer Key The Salesforce application consumer key string clientSecret * Consumer Secret The Salesforce application consumer secret string password * Password The Salesforce user password string userName * Username The Salesforce username string loginUrl Login URL The Salesforce instance login URL string "https://login.salesforce.com" Note Fields marked with an asterisk (*) are mandatory. 62.2. Dependencies At runtime, the salesforce-delete-sink Kamelet relies upon the presence of the following dependencies: camel:salesforce camel:kamelet camel:core camel:jsonpath 62.3. Usage This section describes how you can use the salesforce-delete-sink . 62.3.1. Knative Sink You can use the salesforce-delete-sink Kamelet as a Knative sink by binding it to a Knative object. salesforce-delete-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 62.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 62.3.1.2. Procedure for using the cluster CLI Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-delete-sink-binding.yaml 62.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 62.3.2. Kafka Sink You can use the salesforce-delete-sink Kamelet as a Kafka sink by binding it to a Kafka topic. salesforce-delete-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 62.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 62.3.2.2. Procedure for using the cluster CLI Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-delete-sink-binding.yaml 62.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 62.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/salesforce-delete-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"",
"apply -f salesforce-delete-sink-binding.yaml",
"kamel bind channel:mychannel salesforce-delete-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"",
"apply -f salesforce-delete-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/salesforce-sink-delete |
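When the Kamelet is bound to the my-topic Kafka topic as in the example above, each record on that topic must carry the JSON body described at the start of this chapter, with the sObjectId and sObjectName keys. The following is a minimal sketch of producing such a record with the kafka-python package; the broker address, the placeholder record ID, and the choice of kafka-python are assumptions, not part of the Kamelet.

import json
from kafka import KafkaProducer

# Assumed broker address; replace it with your AMQ Streams bootstrap server.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
)

# Body format expected by the salesforce-delete-sink Kamelet.
# The record ID below is the documentation placeholder, not a real Salesforce ID.
delete_request = {"sObjectId": "XXXXX0", "sObjectName": "Contact"}

producer.send("my-topic", delete_request)
producer.flush()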
Chapter 2. Viewing subscription status | Chapter 2. Viewing subscription status The status tiles on the Subscription Inventory page show the number of subscriptions in your inventory that are active, already expired, expiring soon, or scheduled to become active on a future date. Active The subscription term has started based on the terms of the contract. You can use the subscription. Expired The subscription term has ended based on the terms of the contract. You must renew the subscription to activate it. Expiring soon The subscription is scheduled to expire within 30 days. You must renew the subscription before the end date to keep it active. Future dated The subscription is scheduled to become active within 30 days. You can begin using the subscription on the start date. Subscriptions can have more than one status. For example, an active subscription that is scheduled to expire within 30 days will be counted on both the Active and Expiring soon tiles. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/viewing_and_managing_your_subscription_inventory_on_the_hybrid_cloud_console/proc-viewing-sub-status-tiles |
Chapter 12. Configuring additional devices in an IBM Z or LinuxONE environment | Chapter 12. Configuring additional devices in an IBM Z or LinuxONE environment After installing OpenShift Container Platform, you can configure additional devices for your cluster in an IBM Z or LinuxONE environment, which is installed with z/VM. The following devices can be configured: Fibre Channel Protocol (FCP) host FCP LUN DASD qeth You can configure devices by adding udev rules using the Machine Config Operator (MCO) or you can configure devices manually. Note The procedures described here apply only to z/VM installations. If you have installed your cluster with RHEL KVM on IBM Z or LinuxONE infrastructure, no additional configuration is needed inside the KVM guest after the devices were added to the KVM guests. However, both in z/VM and RHEL KVM environments the steps to configure the Local Storage Operator and Kubernetes NMState Operator need to be applied. Additional resources Post-installation machine configuration tasks 12.1. Configuring additional devices using the Machine Config Operator (MCO) Tasks in this section describe how to use features of the Machine Config Operator (MCO) to configure additional devices in an IBM Z or LinuxONE environment. Configuring devices with the MCO is persistent but only allows specific configurations for compute nodes. MCO does not allow control plane nodes to have different configurations. Prerequisites You are logged in to the cluster as a user with administrative privileges. The device must be available to the z/VM guest. The device is already attached. The device is not included in the cio_ignore list, which can be set in the kernel parameters. You have created a MachineConfig object file with the following YAML: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: "" 12.1.1. Configuring a Fibre Channel Protocol (FCP) host The following is an example of how to configure an FCP host adapter with N_Port Identifier Virtualization (NPIV) by adding a udev rule. Procedure Take the following sample udev rule 41-zfcp-host-0.0.8000.rules : ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.8000", DRIVER=="zfcp", GOTO="cfg_zfcp_host_0.0.8000" ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="zfcp", TEST=="[ccw/0.0.8000]", GOTO="cfg_zfcp_host_0.0.8000" GOTO="end_zfcp_host_0.0.8000" LABEL="cfg_zfcp_host_0.0.8000" ATTR{[ccw/0.0.8000]online}="1" LABEL="end_zfcp_host_0.0.8000" Convert the rule to Base64 encoding by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the previous step. 3 The path where the udev rule is located. 12.1.2. Configuring an FCP LUN The following is an example of how to configure an FCP LUN by adding a udev rule.
You can add new FCP LUNs or add additional paths to LUNs that are already configured with multipathing. Procedure Take the following sample udev rule 41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules : ACTION=="add", SUBSYSTEMS=="ccw", KERNELS=="0.0.8000", GOTO="start_zfcp_lun_0.0.8000" GOTO="end_zfcp_lun_0.0.8000" LABEL="start_zfcp_lun_0.0.8000" SUBSYSTEM=="fc_remote_ports", ATTR{port_name}=="0x500507680d760026", GOTO="cfg_fc_0.0.8000_0x500507680d760026" GOTO="end_zfcp_lun_0.0.8000" LABEL="cfg_fc_0.0.8000_0x500507680d760026" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}="0x00bc000000000000" GOTO="end_zfcp_lun_0.0.8000" LABEL="end_zfcp_lun_0.0.8000" Convert the rule to Base64 encoding by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the previous step. 3 The path where the udev rule is located. 12.1.3. Configuring DASD The following is an example of how to configure a DASD device by adding a udev rule. Procedure Take the following sample udev rule 41-dasd-eckd-0.0.4444.rules : ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.4444", DRIVER=="dasd-eckd", GOTO="cfg_dasd_eckd_0.0.4444" ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="dasd-eckd", TEST=="[ccw/0.0.4444]", GOTO="cfg_dasd_eckd_0.0.4444" GOTO="end_dasd_eckd_0.0.4444" LABEL="cfg_dasd_eckd_0.0.4444" ATTR{[ccw/0.0.4444]online}="1" LABEL="end_dasd_eckd_0.0.4444" Convert the rule to Base64 encoding by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the previous step. 3 The path where the udev rule is located. 12.1.4. Configuring qeth The following is an example of how to configure a qeth device by adding a udev rule.
Procedure Take the following sample udev rule 41-qeth-0.0.1000.rules : ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1000", DRIVER=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1001", DRIVER=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccw", KERNEL=="0.0.1002", DRIVER=="qeth", GOTO="group_qeth_0.0.1000" ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1000", DRIVER=="qeth", GOTO="cfg_qeth_0.0.1000" GOTO="end_qeth_0.0.1000" LABEL="group_qeth_0.0.1000" TEST=="[ccwgroup/0.0.1000]", GOTO="end_qeth_0.0.1000" TEST!="[ccw/0.0.1000]", GOTO="end_qeth_0.0.1000" TEST!="[ccw/0.0.1001]", GOTO="end_qeth_0.0.1000" TEST!="[ccw/0.0.1002]", GOTO="end_qeth_0.0.1000" ATTR{[drivers/ccwgroup:qeth]group}="0.0.1000,0.0.1001,0.0.1002" GOTO="end_qeth_0.0.1000" LABEL="cfg_qeth_0.0.1000" ATTR{[ccwgroup/0.0.1000]online}="1" LABEL="end_qeth_0.0.1000" Convert the rule to Base64 encoding by running the following command: USD base64 /path/to/file/ Copy the following MCO sample profile into a YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-qeth-0.0.1000.rules 3 1 The role you have defined in the machine config file. 2 The Base64 encoded string that you have generated in the previous step. 3 The path where the udev rule is located. Next steps Install and configure the Local Storage Operator Updating node network configuration 12.2. Configuring additional devices manually Tasks in this section describe how to manually configure additional devices in an IBM Z or LinuxONE environment. This configuration method is persistent over node restarts, but it is not OpenShift Container Platform native, and you need to redo the steps if you replace the node. Prerequisites You are logged in to the cluster as a user with administrative privileges. The device must be available to the node. In a z/VM environment, the device must be attached to the z/VM guest. Procedure Connect to the node via SSH by running the following command: USD ssh <user>@<node_ip_address> You can also start a debug session to the node by running the following command: USD oc debug node/<node_name> To enable the devices with the chzdev command, enter the following commands: USD sudo chzdev -e 0.0.8000 sudo chzdev -e 1000-1002 sudo chzdev -e 4444 sudo chzdev -e 0.0.8000:0x500507680d760026:0x00bc000000000000 Additional resources See Persistent device configuration in IBM Documentation. 12.3. RoCE network Cards RoCE (RDMA over Converged Ethernet) network cards do not need to be enabled and their interfaces can be configured with the Kubernetes NMState Operator whenever they are available in the node. For example, RoCE network cards are available if they are attached in a z/VM environment or passed through in a RHEL KVM environment. 12.4. Enabling multipathing for FCP LUNs Tasks in this section describe how to manually configure additional devices in an IBM Z or LinuxONE environment. This configuration method is persistent over node restarts, but it is not OpenShift Container Platform native, and you need to redo the steps if you replace the node.
Important On IBM Z and LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and LinuxONE . Prerequisites You are logged in to the cluster as a user with administrative privileges. You have configured multiple paths to a LUN with either method explained above. Procedure Connect to the node via SSH by running the following command: USD ssh <user>@<node_ip_address> You can also start a debug session to the node by running the following command: USD oc debug node/<node_name> To enable multipathing, run the following command: USD sudo /sbin/mpathconf --enable To start the multipathd daemon, run the following command: USD sudo multipath Optional: To format your multipath device with fdisk, run the following command: USD sudo fdisk /dev/mapper/mpatha Verification To verify that the devices have been grouped, run the following command: USD sudo multipath -ll Example output mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running Next steps Install and configure the Local Storage Operator Updating node network configuration | [
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3",
"ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" LABEL=\"cfg_dasd_eckd_0.0.4444\" ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo chzdev -e 0.0.8000 sudo chzdev -e 1000-1002 sude chzdev -e 4444 sudo chzdev -e 0.0.8000:0x500507680d760026:0x00bc000000000000",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo /sbin/mpathconf --enable",
"sudo multipath",
"sudo fdisk /dev/mapper/mpatha",
"sudo multipath -II",
"mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/post-install-configure-additional-devices-ibmz |
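A minimal end-to-end sketch of the MCO workflow described above, assuming the udev rule has been saved locally as 41-zfcp-host-0.0.8000.rules and the MachineConfig sample as mco-41-zfcp-host.yaml (both file names are illustrative):
base64 -w0 41-zfcp-host-0.0.8000.rules     # paste the output into the <encoded_base64_string> field
oc apply -f mco-41-zfcp-host.yaml          # create the MachineConfig
oc get mcp worker0 -w                      # watch the worker0 pool roll out the change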
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Include the Document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/proc_providing-feedback-on-red-hat-documentation_default
probe::scheduler.process_free | probe::scheduler.process_free Name probe::scheduler.process_free - Scheduler freeing a data structure for a process Synopsis scheduler.process_free Values name name of the probe point pid PID of the process getting freed priority priority of the process getting freed | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-process-free |
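A minimal usage sketch (assuming SystemTap and the matching kernel debuginfo packages are installed): the following one-liner prints the values exposed by this probe point each time it fires:
stap -e 'probe scheduler.process_free { printf("%s: pid=%d priority=%d\n", name, pid, priority) }'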
7.306. authd | 7.306. authd 7.306.1. RHBA-2013:1168 - authd bug fix update Updated authd packages that fix one bug are now available for Red Hat Enterprise Linux 6. The authd package contains a small and fast RFC 1413 Ident Protocol daemon with both xinetd server and interactive modes that supports IPv6 and IPv4 as well as the more popular features of pidentd. Bug Fix BZ# 994118 If authd encountered a negative UID when reading a /proc/net/tcp entry then it stopped reading at that point, and failed to identify the connection it was looking for. Consequently, authd returned a "non-existent user" error response. With this update, the handling of negative UID values in authd is modified, and authd correctly reports a valid user. Users of authd are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/authd |
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service | Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service Red Hat Developer Hub 1.4 Red Hat Customer Content Services | [
"securityContext: fsGroup: 300",
"db-statefulset.yaml: | spec.template.spec deployment.yaml: | spec.template.spec",
"apply -f rhdh-operator-<VERSION>.yaml",
"-n <your_namespace> create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<redhat_user_name> --docker-password=<redhat_password> --docker-email=<email>",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: rhdh-ingress namespace: my-rhdh-project spec: ingressClassName: webapprouting.kubernetes.azure.com rules: - http: paths: - path: / pathType: Prefix backend: service: name: backstage-<your-CR-name> port: name: http-backend",
"-n <your_namespace> apply -f rhdh-ingress.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: https://<app_address> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" baseUrl: https://<app_address> cors: origin: https://<app_address>",
"apiVersion: v1 kind: Secret metadata: name: my-rhdh-secrets stringData: BACKEND_SECRET: \"xxx\"",
"apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: <your-rhdh-cr> spec: application: imagePullSecrets: - rhdh-pull-secret appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: \"my-rhdh-secrets\"",
"-n my-rhdh-project apply -f rhdh.yaml",
"-n my-rhdh-project delete -f rhdh.yaml",
"az aks approuting enable --resource-group <your_ResourceGroup> --name <your_ClusterName>",
"az extension add --upgrade -n aks-preview --allow-preview true",
"get svc nginx --namespace app-routing-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'",
"create namespace <your_namespace>",
"az login [--tenant=<optional_directory_name>]",
"az group create --name <resource_group_name> --location <location>",
"az account list-locations -o table",
"az aks create --resource-group <resource_group_name> --name <cluster_name> --enable-managed-identity --generate-ssh-keys",
"az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"DEPLOYMENT_NAME=<redhat-developer-hub> NAMESPACE=<rhdh> create namespace USD{NAMESPACE} config set-context --current --namespace=USD{NAMESPACE}",
"-n USDNAMESPACE create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<redhat_user_name> --docker-password=<redhat_password> --docker-email=<email>",
"global: host: <app_address> route: enabled: false upstream: ingress: enabled: true className: webapprouting.kubernetes.azure.com host: backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: fsGroup: 3000 postgresql: image: pullSecrets: - rhdh-pull-secret primary: podSecurityContext: enabled: true fsGroup: 3000 volumePermissions: enabled: true",
"helm -n USDNAMESPACE install -f values.yaml USDDEPLOYMENT_NAME openshift-helm-charts/redhat-developer-hub --version 1.4.2",
"get deploy USDDEPLOYMENT_NAME -n USDNAMESPACE",
"PASSWORD=USD(kubectl get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(kubectl get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade USDDEPLOYMENT_NAME -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.4.2/redhat-developer-hub-1.4.2.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"",
"echo \"https://USDDEPLOYMENT_NAME-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\"",
"helm upgrade USDDEPLOYMENT_NAME -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.4.2/redhat-developer-hub-1.4.2.tgz",
"helm -n USDNAMESPACE delete USDDEPLOYMENT_NAME"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/index |
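A short verification sketch after either installation method, assuming the Operator-based install in the my-rhdh-project namespace (adjust the namespace, resource names, and host for your environment or for the Helm-based install):
kubectl -n my-rhdh-project get pods
kubectl -n my-rhdh-project get ingress
curl -k https://<app_address>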
Chapter 14. AWS S3 Streaming upload Sink | Chapter 14. AWS S3 Streaming upload Sink Upload data to AWS S3 in streaming upload mode. 14.1. Configuration Options The following table summarizes the configuration options available for the aws-s3-streaming-upload-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS. string bucketNameOrArn * Bucket Name The S3 Bucket name or ARN. string keyName * Key Name Setting the key name for an element in the bucket through endpoint parameter. In Streaming Upload, with the default configuration, this will be the base for the progressive creation of files. string region * AWS Region The AWS region to connect to. string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS. string autoCreateBucket Autocreate Bucket Setting the autocreation of the S3 bucket bucketName. boolean false batchMessageNumber Batch Message Number The number of messages composing a batch in streaming upload mode int 10 batchSize Batch Size The batch size (in bytes) in streaming upload mode int 1000000 namingStrategy Naming Strategy The naming strategy to use in streaming upload mode. There are 2 enums and the value can be one of progressive, random string "progressive" restartingPolicy Restarting Policy The restarting policy to use in streaming upload mode. There are 2 enums and the value can be one of override, lastPart string "lastPart" streamingUploadMode Streaming Upload Mode Setting the Streaming Upload Mode boolean true Note Fields marked with an asterisk (*) are mandatory. 14.2. Dependencies At runtime, the aws-s3-streaming-upload-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-s3 camel:kamelet 14.3. Usage This section describes how you can use the aws-s3-streaming-upload-sink . 14.3.1. Knative Sink You can use the aws-s3-streaming-upload-sink Kamelet as a Knative sink by binding it to a Knative object. aws-s3-streaming-upload-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-streaming-upload-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-streaming-upload-sink properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" keyName: "The Key Name" region: "eu-west-1" secretKey: "The Secret Key" 14.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 14.3.1.2. Procedure for using the cluster CLI Save the aws-s3-streaming-upload-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-s3-streaming-upload-sink-binding.yaml 14.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 14.3.2. Kafka Sink You can use the aws-s3-streaming-upload-sink Kamelet as a Kafka sink by binding it to a Kafka topic. 
aws-s3-streaming-upload-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-streaming-upload-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-streaming-upload-sink properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" keyName: "The Key Name" region: "eu-west-1" secretKey: "The Secret Key" 14.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 14.3.2.2. Procedure for using the cluster CLI Save the aws-s3-streaming-upload-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-s3-streaming-upload-sink-binding.yaml 14.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 14.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-s3-streaming-upload-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-streaming-upload-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-streaming-upload-sink properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" keyName: \"The Key Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-s3-streaming-upload-sink-binding.yaml",
"kamel bind channel:mychannel aws-s3-streaming-upload-sink -p \"sink.accessKey=The Access Key\" -p \"sink.bucketNameOrArn=The Bucket Name\" -p \"sink.keyName=The Key Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-streaming-upload-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-streaming-upload-sink properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" keyName: \"The Key Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-s3-streaming-upload-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-streaming-upload-sink -p \"sink.accessKey=The Access Key\" -p \"sink.bucketNameOrArn=The Bucket Name\" -p \"sink.keyName=The Key Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/aws-s3-streaming-upload-sink |
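The optional batching and naming properties can be supplied in the same way as the required ones. For example (illustrative values only), the following variant of the Kamel CLI command starts a new upload roughly every 25 messages or 5 MB, whichever is reached first, and names objects randomly:
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-s3-streaming-upload-sink -p "sink.accessKey=The Access Key" -p "sink.bucketNameOrArn=The Bucket Name" -p "sink.keyName=The Key Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.batchMessageNumber=25" -p "sink.batchSize=5000000" -p "sink.namingStrategy=random"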
Chapter 1. Service Mesh 3.x | Chapter 1. Service Mesh 3.x 1.1. Service Mesh overview Red Hat OpenShift Service Mesh manages and secures communication between microservices by providing traffic management, advanced routing, and load balancing. Red Hat OpenShift Service Mesh also enhances security through features like mutual TLS, and offers observability with metrics, logging, and tracing to monitor and troubleshoot applications. Note Because Red Hat OpenShift Service Mesh 3.0 releases on a different cadence from OpenShift Container Platform, the Service Mesh documentation is available as a separate documentation set at Red Hat OpenShift Service Mesh . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/service_mesh/service-mesh-3-x |
Release notes for Red Hat build of OpenJDK 17.0.8 | Release notes for Red Hat build of OpenJDK 17.0.8 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.8/index |
Appendix B. Using the command-line interface to install the Ceph software | Appendix B. Using the command-line interface to install the Ceph software As a storage administrator, you can choose to manually install various components of the Red Hat Ceph Storage software. B.1. Installing the Ceph Command Line Interface The Ceph command-line interface (CLI) enables administrators to execute Ceph administrative commands. The CLI is provided by the ceph-common package and includes the following utilities: ceph ceph-authtool ceph-dencoder rados Prerequisites A running Ceph storage cluster, preferably in the active + clean state. Procedure On the client node, enable the Red Hat Ceph Storage 4 Tools repository: On the client node, install the ceph-common package: From the initial monitor node, copy the Ceph configuration file, in this case ceph.conf , and the administration keyring to the client node: Syntax Example Replace <client_host_name> with the host name of the client node. B.2. Manually Installing Red Hat Ceph Storage Important Red Hat does not support or test upgrading manually deployed clusters. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 4. See Chapter 5, Installing Red Hat Ceph Storage using Ansible for details. You can use command-line utilities, such as Yum, to upgrade manually deployed clusters, but Red Hat does not support or test this approach. All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors for production environments and a minimum of three Object Storage Devices (OSD). Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as: The number of replicas for pools The number of placement groups per OSD The heartbeat intervals Any authentication requirement Most of these values are set by default, so it is useful to know about them when setting up the cluster for production. Installing a Ceph storage cluster by using the command line interface involves these steps: Bootstrapping the initial Monitor node Adding an Object Storage Device (OSD) node Monitor Bootstrapping Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data: Unique Identifier The File System Identifier ( fsid ) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer. Monitor Name Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and not co-locating the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command. Monitor Map Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires: The File System Identifier ( fsid ) The cluster name, or the default cluster name of ceph is used At least one host name and its IP address. Monitor Keyring Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.
Administrator Keyring To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring. The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid , the mon initial members and the mon host settings at a minimum. You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file might contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster. To bootstrap the initial Monitor, perform the following steps: Enable the Red Hat Ceph Storage 4 Monitor repository: On your initial Monitor node, install the ceph-mon package as root : As root , create a Ceph configuration file in the /etc/ceph/ directory. As root , generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file: View the current Ceph configuration file: As root , add the initial Monitor to the Ceph configuration file: Syntax Example As root , add the IP address of the initial Monitor to the Ceph configuration file: Syntax Example Note To use IPv6 addresses, set the ms bind ipv6 option to true . For details, see the Bind section in the Configuration Guide for Red Hat Ceph Storage 4. As root , create the keyring for the cluster and generate the Monitor secret key: As root , generate an administrator keyring, generate a ceph.client.admin.keyring user and add the user to the keyring: Syntax Example As root , add the ceph.client.admin.keyring key to the ceph.mon.keyring : Generate the Monitor map. Specify the node name, IP address, and the fsid of the initial Monitor, and save it as /tmp/monmap : Syntax Example As root on the initial Monitor node, create a default data directory: Syntax Example As root , populate the initial Monitor daemon with the Monitor map and keyring: Syntax Example View the current Ceph configuration file: For more details on the various Ceph configuration settings, see the Configuration Guide for Red Hat Ceph Storage 4. The following example of a Ceph configuration file lists some of the most common configuration settings: Example As root , create the done file: Syntax Example As root , update the owner and group permissions on the newly created directory and files: Syntax Example Note If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example: As root , start and enable the ceph-mon process on the initial Monitor node: Syntax Example As root , verify that the monitor daemon is running: Syntax Example To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Adding a Monitor section in the Administration Guide for Red Hat Ceph Storage 4. OSD Bootstrapping Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object. The default number of copies for an object is three. You will need three OSD nodes at minimum.
However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file. For more details, see the OSD Configuration Reference section in the Configuration Guide for Red Hat Ceph Storage 4. After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node. To add an OSD to the cluster and update the default CRUSH map, execute the following on each OSD node: Enable the Red Hat Ceph Storage 4 OSD repository: As root , install the ceph-osd package on the Ceph OSD node: Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node: Syntax Example Generate the Universally Unique Identifier (UUID) for the OSD: As root , create the OSD instance: Syntax Example Note This command outputs the OSD number identifier needed for subsequent steps. As root , create the default directory for the new OSD: Syntax Example As root , prepare the drive for use as an OSD, and mount it to the directory you just created. Create a partition for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk: Syntax Example As root , initialize the OSD data directory: Syntax Example As root , register the OSD authentication key: Syntax Example As root , add the OSD node to the CRUSH map: Syntax Example As root , place the OSD node under the default CRUSH tree: Syntax Example As root , add the OSD disk to the CRUSH map: Syntax Example Note You can also decompile the CRUSH map, and add the OSD to the device list. Add the OSD node as a bucket, then add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map and set the CRUSH map. See the Editing a CRUSH map section in the Storage Strategies Guide for Red Hat Ceph Storage 4 for more details. As root , update the owner and group permissions on the newly created directory and files: Syntax Example The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in . The new OSD must be up before it can begin receiving data. As root , enable and start the OSD process: Syntax Example Once you start the OSD daemon, it is up and in . Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command: To view the OSD tree, execute the following command: Example To expand the storage capacity by adding new OSDs to the storage cluster, see the Adding an OSD section in the Administration Guide for Red Hat Ceph Storage 4. B.3. Manually installing Ceph Manager Usually, the Ansible automation utility installs the Ceph Manager daemon ( ceph-mgr ) when you deploy the Red Hat Ceph Storage cluster. However, if you do not use Ansible to manage Red Hat Ceph Storage, you can install Ceph Manager manually. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node. Prerequisites A working Red Hat Ceph Storage cluster root or sudo access The rhceph-4-mon-for-rhel-8-x86_64-rpms repository enabled Open ports 6800-7300 on the public network if a firewall is used Procedure Use the following commands on the node where ceph-mgr will be deployed and as the root user or with the sudo utility.
Install the ceph-mgr package: Create the /var/lib/ceph/mgr/ceph- hostname / directory: Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example: In the newly created directory, create an authentication key for the ceph-mgr daemon: Change the owner and group of the /var/lib/ceph/mgr/ directory to ceph:ceph : Enable the ceph-mgr target: Enable and start the ceph-mgr instance: Replace hostname with the host name of the node where the ceph-mgr will be deployed, for example: Verify that the ceph-mgr daemon started successfully: The output will include a line similar to the following one under the services: section: Install more ceph-mgr daemons to serve as standby daemons that become active if the current active daemon fails. Additional resources Requirements for Installing Red Hat Ceph Storage B.4. Manually Installing Ceph Block Device The following procedure shows how to install and mount a thin-provisioned, resizable Ceph Block Device. Important Ceph Block Devices must be deployed on separate nodes from the Ceph Monitor and OSD nodes. Running kernel clients and kernel server daemons on the same node can lead to kernel deadlocks. Prerequisites Ensure that you perform the tasks listed in Section B.1, "Installing the Ceph Command Line Interface". If you use Ceph Block Devices as a back end for virtual machines (VMs) that use QEMU, increase the default file descriptor limit. See the Ceph - VM hangs when transferring large amounts of data to RBD disk Knowledgebase article for details. Procedure Create a Ceph Block Device user named client.rbd with full permissions to files on OSD nodes ( osd 'allow rwx' ) and output the result to a keyring file: Replace <pool_name> with the name of the pool that you want to allow client.rbd to have access to, for example rbd : See the User Management section in the Red Hat Ceph Storage 4 Administration Guide for more information about creating users. Create a block device image: Specify <image_name> , <image_size> , and <pool_name> , for example: Warning The default Ceph configuration includes the following Ceph Block Device features: layering exclusive-lock object-map deep-flatten fast-diff If you use the kernel RBD ( krbd ) client, you may not be able to map the block device image. To work around this problem, disable the unsupported features. Use one of the following options to do so: Disable the unsupported features dynamically: For example: Use the --image-feature layering option with the rbd create command to enable only layering on newly created block device images. Disable the features by default in the Ceph configuration file: This is a known issue; for details, see the Known Issues chapter in the Release Notes for Red Hat Ceph Storage 4. All these features work for users that use the user-space RBD client to access the block device images. Map the newly created image to the block device: For example: Use the block device by creating a file system: Specify the pool name and the image name, for example: This action can take a few moments. Mount the newly created file system: For example: Additional Resources The Block Device Guide for Red Hat Ceph Storage 4. B.5. Manually Installing Ceph Object Gateway The Ceph object gateway, also known as the RADOS gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters. Prerequisites A running Ceph storage cluster, preferably in the active + clean state.
Perform the tasks listed in Chapter 3, Requirements for Installing Red Hat Ceph Storage . Procedure Enable the Red Hat Ceph Storage 4 Tools repository: On the Object Gateway node, install the ceph-radosgw package: On the initial Monitor node, perform the following steps. Update the Ceph configuration file as follows: Where <obj_gw_hostname> is a short host name of the gateway node. To view the short host name, use the hostname -s command. Copy the updated configuration file to the new Object Gateway node and all other nodes in the Ceph storage cluster: Syntax Example Copy the ceph.client.admin.keyring file to the new Object Gateway node: Syntax Example On the Object Gateway node, create the data directory: On the Object Gateway node, add a user and keyring to bootstrap the object gateway: Syntax Example Important When you provide capabilities to the gateway key, you must provide the read capability. However, providing the Monitor write capability is optional; if you provide it, the Ceph Object Gateway will be able to create pools automatically. In such a case, ensure that you specify a reasonable number of placement groups in a pool. Otherwise, the gateway uses the default number, which is most likely not suitable for your needs. See Ceph Placement Groups (PGs) per Pool Calculator for details. On the Object Gateway node, create the done file: On the Object Gateway node, change the owner and group permissions: On the Object Gateway node, open TCP port 8080: On the Object Gateway node, start and enable the ceph-radosgw process: Syntax Example Once installed, the Ceph Object Gateway automatically creates pools if the write capability is set on the Monitor. See the Pools chapter in the Storage Strategies Guide for details on creating pools manually. Additional Resources The Red Hat Ceph Storage 4 Object Gateway Configuration and Administration Guide | [
"subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"yum install ceph-common",
"scp /etc/ceph/ceph.conf <user_name>@<client_host_name>:/etc/ceph/ scp /etc/ceph/ceph.client.admin.keyring <user_name>@<client_host_name:/etc/ceph/",
"scp /etc/ceph/ceph.conf root@node1:/etc/ceph/ scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/",
"subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms",
"yum install ceph-mon",
"touch /etc/ceph/ceph.conf",
"echo \"[global]\" > /etc/ceph/ceph.conf echo \"fsid = `uuidgen`\" >> /etc/ceph/ceph.conf",
"cat /etc/ceph/ceph.conf [global] fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993",
"echo \"mon initial members = <monitor_host_name>[,<monitor_host_name>]\" >> /etc/ceph/ceph.conf",
"echo \"mon initial members = node1\" >> /etc/ceph/ceph.conf",
"echo \"mon host = <ip-address>[,<ip-address>]\" >> /etc/ceph/ceph.conf",
"echo \"mon host = 192.168.0.120\" >> /etc/ceph/ceph.conf",
"ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *' creating /tmp/ceph.mon.keyring",
"ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilites>' --cap osd '<capabilites>' --cap mds '<capabilites>'",
"ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow' creating /etc/ceph/ceph.client.admin.keyring",
"ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring",
"monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap",
"monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap monmaptool: monmap file /tmp/monmap monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993 monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)",
"mkdir /var/lib/ceph/mon/ceph-<monitor_host_name>",
"mkdir /var/lib/ceph/mon/ceph-node1",
"ceph-mon --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring",
"ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993 ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1",
"cat /etc/ceph/ceph.conf [global] fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 mon_initial_members = node1 mon_host = 192.168.0.120",
"[global] fsid = <cluster-id> mon initial members = <monitor_host_name>[, <monitor_host_name>] mon host = <ip-address>[, <ip-address>] public network = <network>[, <network>] cluster network = <network>[, <network>] auth cluster required = cephx auth service required = cephx auth client required = cephx osd journal size = <n> osd pool default size = <n> # Write an object n times. osd pool default min size = <n> # Allow writing n copy in a degraded state. osd pool default pg num = <n> osd pool default pgp num = <n> osd crush chooseleaf type = <n>",
"touch /var/lib/ceph/mon/ceph-<monitor_host_name>/done",
"touch /var/lib/ceph/mon/ceph-node1/done",
"chown -R <owner>:<group> <path_to_directory>",
"chown -R ceph:ceph /var/lib/ceph/mon chown -R ceph:ceph /var/log/ceph chown -R ceph:ceph /var/run/ceph chown ceph:ceph /etc/ceph/ceph.client.admin.keyring chown ceph:ceph /etc/ceph/ceph.conf chown ceph:ceph /etc/ceph/rbdmap",
"ls -l /etc/ceph/ -rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring -rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring",
"systemctl enable ceph-mon.target systemctl enable ceph-mon@<monitor_host_name> systemctl start ceph-mon@<monitor_host_name>",
"systemctl enable ceph-mon.target systemctl enable ceph-mon@node1 systemctl start ceph-mon@node1",
"systemctl status ceph-mon@<monitor_host_name>",
"systemctl status ceph-mon@node1 ● [email protected] - Ceph cluster monitor daemon Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (running) since Wed 2018-06-27 11:31:30 PDT; 5min ago Main PID: 1017 (ceph-mon) CGroup: /system.slice/system-ceph\\x2dmon.slice/[email protected] └─1017 /usr/bin/ceph-mon -f --cluster ceph --id node1 --setuser ceph --setgroup ceph Jun 27 11:31:30 node1 systemd[1]: Started Ceph cluster monitor daemon. Jun 27 11:31:30 node1 systemd[1]: Starting Ceph cluster monitor daemon",
"subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms",
"yum install ceph-osd",
"scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>",
"scp root@node1:/etc/ceph/ceph.conf /etc/ceph scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph",
"uuidgen b367c360-b364-4b1d-8fc6-09408a9cda7a",
"ceph osd create <uuid> [<osd_id>]",
"ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a 0",
"mkdir /var/lib/ceph/osd/ceph-<osd_id>",
"mkdir /var/lib/ceph/osd/ceph-0",
"parted <path_to_disk> mklabel gpt parted <path_to_disk> mkpart primary 1 10000 mkfs -t <fstype> <path_to_partition> mount -o noatime <path_to_partition> /var/lib/ceph/osd/ceph-<osd_id> echo \"<path_to_partition> /var/lib/ceph/osd/ceph-<osd_id> xfs defaults,noatime 1 2\" >> /etc/fstab",
"parted /dev/sdb mklabel gpt parted /dev/sdb mkpart primary 1 10000 parted /dev/sdb mkpart primary 10001 15000 mkfs -t xfs /dev/sdb1 mount -o noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0 echo \"/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 1 2\" >> /etc/fstab",
"ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>",
"ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a ... auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory ... created new key in keyring /var/lib/ceph/osd/ceph-0/keyring",
"ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-<osd_id>/keyring",
"ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring added key for osd.0",
"ceph osd crush add-bucket <host_name> host",
"ceph osd crush add-bucket node2 host",
"ceph osd crush move <host_name> root=default",
"ceph osd crush move node2 root=default",
"ceph osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]",
"ceph osd crush add osd.0 1.0 host=node2 add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map",
"chown -R <owner>:<group> <path_to_directory>",
"chown -R ceph:ceph /var/lib/ceph/osd chown -R ceph:ceph /var/log/ceph chown -R ceph:ceph /var/run/ceph chown -R ceph:ceph /etc/ceph",
"systemctl enable ceph-osd.target systemctl enable ceph-osd@<osd_id> systemctl start ceph-osd@<osd_id>",
"systemctl enable ceph-osd.target systemctl enable ceph-osd@0 systemctl start ceph-osd@0",
"ceph -w",
"ceph osd tree",
"ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 2 root default -2 2 host node2 0 1 osd.0 up 1 1 -3 1 host node3 1 1 osd.1 up 1 1",
"yum install ceph-mgr",
"mkdir /var/lib/ceph/mgr/ceph- hostname",
"mkdir /var/lib/ceph/mgr/ceph-node1",
"ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-node1/keyring",
"chown -R ceph:ceph /var/lib/ceph/mgr",
"systemctl enable ceph-mgr.target",
"systemctl enable ceph-mgr@ hostname systemctl start ceph-mgr@ hostname",
"systemctl enable ceph-mgr@node1 systemctl start ceph-mgr@node1",
"ceph -s",
"mgr: node1(active)",
"ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=<pool_name>' -o /etc/ceph/rbd.keyring",
"ceph auth get-or-create client.rbd mon 'allow r' osd 'allow rwx pool=rbd' -o /etc/ceph/rbd.keyring",
"rbd create <image_name> --size <image_size> --pool <pool_name> --name client.rbd --keyring /etc/ceph/rbd.keyring",
"rbd create image1 --size 4G --pool rbd --name client.rbd --keyring /etc/ceph/rbd.keyring",
"rbd feature disable <image_name> <feature_name>",
"rbd feature disable image1 object-map deep-flatten fast-diff",
"rbd_default_features = 1",
"rbd map <image_name> --pool <pool_name> --name client.rbd --keyring /etc/ceph/rbd.keyring",
"rbd map image1 --pool rbd --name client.rbd --keyring /etc/ceph/rbd.keyring",
"mkfs.ext4 /dev/rbd/<pool_name>/<image_name>",
"mkfs.ext4 /dev/rbd/rbd/image1",
"mkdir <mount_directory> mount /dev/rbd/<pool_name>/<image_name> <mount_directory>",
"mkdir /mnt/ceph-block-device mount /dev/rbd/rbd/image1 /mnt/ceph-block-device",
"subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-debug-rpms",
"yum install ceph-radosgw",
"[client.rgw.<obj_gw_hostname>] host = <obj_gw_hostname> rgw frontends = \"civetweb port=80\" rgw dns name = <obj_gw_hostname>.example.com",
"scp /etc/ceph/ceph.conf <user_name>@<target_host_name>:/etc/ceph",
"scp /etc/ceph/ceph.conf root@node1:/etc/ceph/",
"scp /etc/ceph/ceph.client.admin.keyring <user_name>@<target_host_name>:/etc/ceph/",
"scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/",
"mkdir -p /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`",
"ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring",
"ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring",
"touch /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/done",
"chown -R ceph:ceph /var/lib/ceph/radosgw chown -R ceph:ceph /var/log/ceph chown -R ceph:ceph /var/run/ceph chown -R ceph:ceph /etc/ceph",
"firewall-cmd --zone=public --add-port=8080/tcp firewall-cmd --zone=public --add-port=8080/tcp --permanent",
"systemctl enable ceph-radosgw.target systemctl enable ceph-radosgw@rgw.<rgw_hostname> systemctl start ceph-radosgw@rgw.<rgw_hostname>",
"systemctl enable ceph-radosgw.target systemctl enable [email protected] systemctl start [email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/using-the-command-line-interface-to-install-the-ceph-software |
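A short verification sketch once the Monitor, Manager, OSD, and Object Gateway daemons are running (the gateway host and port are illustrative; use the port you configured in the rgw frontends setting):
ceph -s          # overall cluster health; the mgr should be listed as active
ceph osd tree    # confirms the OSDs are up and in
curl http://<obj_gw_hostname>:8080    # an anonymous request should return an XML response from the gateway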
Chapter 11. Compiler and Tools | Chapter 11. Compiler and Tools Git cannot be used with HTTP or HTTPS and SSO Git provides the http.delegation configuration variable, which corresponds to the cURL --delegation parameter, for use when delegation of Kerberos tickets is required. However, Git included in Red Hat Enterprise Linux 6 contains irrelevant checks of the version of the libcurl library, while the required fixes are provided by a different version of libcurl on RHEL 6 systems. As a consequence, using Git with Single Sign-On on HTTP or HTTPS connections fails. To work around this problem, use the Git version provided by the rh-git29 Software Collection from Red Hat Software Collections. (BZ# 1430723 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/known_issues_compiler_and_tools
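A workaround sketch, assuming the rh-git29 collection is already installed from Red Hat Software Collections (the repository URL is a placeholder):
scl enable rh-git29 bash
git config --global http.delegation always
git clone https://git.example.com/project.git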
3.6. Tuned and ktune | 3.6. Tuned and ktune Tuned is a daemon that monitors and collects data on the usage of various system components, and uses that information to dynamically tune system settings as required. It can react to changes in CPU and network use, and adjust settings to improve performance in active devices or reduce power consumption in inactive devices. The accompanying ktune partners with the tuned-adm tool to provide a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. Edit these profiles or create new profiles to create performance solutions tailored to your environment. The profiles provided as part of tuned-adm include: default The default power-saving profile. This is the most basic power-saving profile. It enables only the disk and CPU plug-ins. Note that this is not the same as turning tuned-adm off, where both tuned and ktune are disabled. latency-performance A server profile for typical latency performance tuning. This profile disables dynamic tuning mechanisms and transparent hugepages. It uses the performance governor for p-states through cpuspeed , and sets the I/O scheduler to deadline . Additionally, in Red Hat Enterprise Linux 6.5 and later, the profile requests a cpu_dma_latency value of 1 . In Red Hat Enterprise Linux 6.4 and earlier, cpu_dma_latency requested a value of 0 . throughput-performance A server profile for typical throughput performance tuning. This profile is recommended if the system does not have enterprise-class storage. throughput-performance disables power saving mechanisms and enables the deadline I/O scheduler. The CPU governor is set to performance . kernel.sched_min_granularity_ns (scheduler minimal preemption granularity) is set to 10 milliseconds, kernel.sched_wakeup_granularity_ns (scheduler wake-up granularity) is set to 15 milliseconds, vm.dirty_ratio (virtual memory dirty ratio) is set to 40%, and transparent huge pages are enabled. enterprise-storage This profile is recommended for enterprise-sized server configurations with enterprise-class storage, including battery-backed controller cache protection and management of on-disk cache. It is the same as the throughput-performance profile, with one addition: file systems are re-mounted with barrier=0 . virtual-guest This profile is optimized for virtual machines. It is based on the enterprise-storage profile, but also decreases the swappiness of virtual memory. This profile is available in Red Hat Enterprise Linux 6.3 and later. virtual-host Based on the enterprise-storage profile, virtual-host decreases the swappiness of virtual memory and enables more aggressive writeback of dirty pages. Non-root and non-boot file systems are mounted with barrier=0 . Additionally, as of Red Hat Enterprise Linux 6.5, the kernel.sched_migration_cost parameter is set to 5 milliseconds. Prior to Red Hat Enterprise Linux 6.5, kernel.sched_migration_cost used the default value of 0.5 milliseconds. This profile is available in Red Hat Enterprise Linux 6.3 and later. Refer to the Red Hat Enterprise Linux 6 Power Management Guide , available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/ , for further information about tuned and ktune . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/main-analyzeperf-tuned
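As a brief illustration of working with these profiles from the command line (a sketch; the profile chosen here is only an example):

# List the tuning profiles available on the system
tuned-adm list

# Show which profile is currently active
tuned-adm active

# Switch to the profile recommended for servers without enterprise-class storage
tuned-adm profile throughput-performance

# Disable tuning entirely, turning off both tuned and ktune
tuned-adm off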
18.12.10.12. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 | 18.12.10.12. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 Protocol ID: igmp-ipv6, esp-ipv6, ah-ipv6, udplite-ipv6, all-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 18.14. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR Start of range of source IP address srcipto IP_ADDR End of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING Text string up to 256 characters state STRING Comma-separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS Flags for the IPSet; requires ipset attribute | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-igmp-esp-ah-udplite-over-ipv6
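As a hedged sketch of how these attributes are used (the filter name, priority, and address below are illustrative and not taken from this guide), a rule for the all-ipv6 protocol can be placed in a filter definition and registered with virsh:

# Write a hypothetical filter that drops all IPv6 traffic from one source address
cat > drop-one-ipv6-host.xml << 'EOF'
<filter name='drop-one-ipv6-host' chain='root'>
  <rule action='drop' direction='in' priority='500'>
    <all-ipv6 srcipaddr='2001:db8::10' comment='drop all IPv6 traffic from this host'/>
  </rule>
</filter>
EOF

# Register the filter with libvirt and confirm it is listed
virsh nwfilter-define drop-one-ipv6-host.xml
virsh nwfilter-list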
16.3. Backing Up and Restoring Virtual Machines Using a Backup Storage Domain | 16.3. Backing Up and Restoring Virtual Machines Using a Backup Storage Domain 16.3.1. Backup storage domains explained A backup storage domain is one that you can use specifically for storing and migrating virtual machines and virtual machine templates for the purpose of backing up and restoring for disaster recovery, migration, or any other backup/restore usage model. A backup domain differs from a non-backup domain in that all virtual machines on a backup domain are in a powered-down state. A virtual machine cannot run on a backup domain. You can set any data storage domain to be a backup domain. You can enable or disable this setting by selecting or deselecting a checkbox in the Manage Domain dialog box. You can enable this setting only after all virtual machines on that storage domain are stopped. You cannot start a virtual machine stored on a backup domain. The Manager blocks this and any other operation that might invalidate the backup. However, you can run a virtual machine based on a template stored on a backup domain if the virtual machine's disks are not part of a backup domain. As with other types of storage domains, you can attach or detach backup domains to or from a data center. So, in addition to storing backups, you can use backup domains to migrate virtual machines between data centers. Advantages Some reasons to use a backup domain, rather than an export domain, are listed here: You can have multiple backup storage domains in a data center, as opposed to only one export domain. You can dedicate a backup storage domain to use for backup and disaster recovery. You can transfer a backup of a virtual machine, a template, or a snapshot to a backup storage domain. Migrating a large number of virtual machines, templates, or OVF files is significantly faster with backup domains than export domains. A backup domain uses disk space more efficiently than an export domain. Backup domains support both file storage (NFS, Gluster) and block storage (Fibre Channel and iSCSI). This contrasts with export domains, which only support file storage. You can dynamically enable and disable the backup setting for a storage domain, taking into account the restrictions. Restrictions Any virtual machine or template on a backup domain must have all its disks on that same domain. All virtual machines on a storage domain must be powered down before you can set it to be a backup domain. You cannot run a virtual machine that is stored on a backup domain, because doing so might manipulate the disk's data. A backup domain cannot be the target of memory volumes because memory volumes are only supported for active virtual machines. You cannot preview a virtual machine on a backup domain. Live migration of a virtual machine to a backup domain is not possible. You cannot set a backup domain to be the master domain. You cannot set a Self-hosted engine's domain to be a backup domain. Do not use the default storage domain as a backup domain. 16.3.2. Setting a data storage domain to be a backup domain Prerequisites All disks belonging to a virtual machine or template on the storage domain must be on the same domain. All virtual machines on the domain must be powered down. Procedure In the Administration Portal, select Storage Domains . Create a new storage domain or select an existing storage domain and click Manage Domain . The Manage Domains dialog box opens. Under Advanced Parameters , select the Backup checkbox. 
The domain is now a backup domain. 16.3.3. Backing up or Restoring a Virtual Machine or Snapshot Using a Backup Domain You can back up a powered down virtual machine or snapshot. You can then store the backup on the same data center and restore it as needed, or migrate it to another data center. Procedure: Backing Up a Virtual Machine Create a backup domain. See Section 16.3.2, "Setting a data storage domain to be a backup domain" . Create a new virtual machine based on the virtual machine you want to back up: To back up a snapshot, first create a virtual machine from a snapshot. See Creating a Virtual Machine from a Snapshot in the Virtual Machine Management Guide . To back up a virtual machine, first clone the virtual machine. See Cloning a Virtual Machine in the Virtual Machine Management Guide . Make sure the clone is powered down before proceeding. Export the new virtual machine to a backup domain. See Exporting a Virtual Machine to a Data Domain in the Virtual Machine Management Guide . Procedure: Restoring a Virtual Machine Make sure that the backup storage domain that stores the virtual machine backup is attached to a data center. Import the virtual machine from the backup domain. See Section 11.7.5, "Importing Virtual Machines from Imported Data Storage Domains" . Related information Section 11.7.2, "Importing storage domains" Section 11.7.3, "Migrating Storage Domains between Data Centers in the Same Environment" Section 11.7.4, "Migrating Storage Domains between Data Centers in Different Environments" Section 11.7.5, "Importing Virtual Machines from Imported Data Storage Domains" | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-backing_up_and_restoring_virtual_machines_using_a_backup_domain |
4.13. Hardening TLS Configuration | 4.13. Hardening TLS Configuration TLS ( Transport Layer Security ) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols , authentication methods , and encryption algorithms , it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. Note that the default settings provided by libraries included in Red Hat Enterprise Linux 7 are secure enough for most deployments. The TLS implementations use secure algorithms where possible while not preventing connections from or to legacy clients or servers. Apply the hardened settings described in this section in environments with strict security requirements where legacy clients or servers that do not support secure algorithms or protocols are not expected or allowed to connect. 4.13.1. Choosing Algorithms to Enable There are several components that need to be selected and configured. Each of the following directly influences the robustness of the resulting configuration (and, consequently, the level of support in clients) or the computational demands that the solution has on the system. Protocol Versions The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to include support for older versions of TLS (or even SSL ), allow your systems to negotiate connections using only the latest version of TLS . Do not allow negotiation using SSL version 2 or 3. Both of those versions have serious security vulnerabilities. Only allow negotiation using TLS version 1.0 or higher. The current version of TLS , 1.2, should always be preferred. Note Please note that currently, the security of all versions of TLS depends on the use of TLS extensions, specific ciphers (see below), and other workarounds. All TLS connection peers need to implement secure renegotiation indication ( RFC 5746 ), must not support compression, and must implement mitigating measures for timing attacks against CBC -mode ciphers (the Lucky Thirteen attack). TLS 1.0 clients need to additionally implement record splitting (a workaround against the BEAST attack). TLS 1.2 supports Authenticated Encryption with Associated Data ( AEAD ) mode ciphers like AES-GCM , AES-CCM , or Camellia-GCM , which have no known issues. All the mentioned mitigations are implemented in cryptographic libraries included in Red Hat Enterprise Linux. See Table 4.6, "Protocol Versions" for a quick overview of protocol versions and recommended usage. Table 4.6. Protocol Versions Protocol Version Usage Recommendation SSL v2 Do not use. Has serious security vulnerabilities. SSL v3 Do not use. Has serious security vulnerabilities. TLS 1.0 Use for interoperability purposes where needed. Has known issues that cannot be mitigated in a way that guarantees interoperability, and thus mitigations are not enabled by default. Does not support modern cipher suites. TLS 1.1 Use for interoperability purposes where needed. Has no known issues but relies on protocol fixes that are included in all the TLS implementations in Red Hat Enterprise Linux. Does not support modern cipher suites. TLS 1.2 Recommended version. 
Supports the modern AEAD cipher suites. Some components in Red Hat Enterprise Linux are configured to use TLS 1.0 even though they provide support for TLS 1.1 or even 1.2 . This is motivated by an attempt to achieve the highest level of interoperability with external services that may not support the latest versions of TLS . Depending on your interoperability requirements, enable the highest available version of TLS . Important SSL v3 is not recommended for use. However, if, despite the fact that it is considered insecure and unsuitable for general use, you absolutely must leave SSL v3 enabled, see Section 4.8, "Using stunnel" for instructions on how to use stunnel to securely encrypt communications even when using services that do not support encryption or are only capable of using obsolete and insecure modes of encryption. Cipher Suites Modern, more secure cipher suites should be preferred to old, insecure ones. Always disable the use of eNULL and aNULL cipher suites, which do not offer any encryption or authentication at all. If at all possible, ciphers suites based on RC4 or HMAC-MD5 , which have serious shortcomings, should also be disabled. The same applies to the so-called export cipher suites, which have been intentionally made weaker, and thus are easy to break. While not immediately insecure, cipher suites that offer less than 128 bits of security should not be considered for their short useful life. Algorithms that use 128 bit of security or more can be expected to be unbreakable for at least several years, and are thus strongly recommended. Note that while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security. Always give preference to cipher suites that support (perfect) forward secrecy ( PFS ), which ensures the confidentiality of encrypted data even in case the server key is compromised. This rules out the fast RSA key exchange, but allows for the use of ECDHE and DHE . Of the two, ECDHE is the faster and therefore the preferred choice. You should also give preference to AEAD ciphers, such as AES-GCM , before CBC -mode ciphers as they are not vulnerable to padding oracle attacks. Additionally, in many cases, AES-GCM is faster than AES in CBC mode, especially when the hardware has cryptographic accelerators for AES . Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even faster than pure RSA key exchange. To provide support for legacy clients, you can install two pairs of certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for legacy ones). Public Key Length When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which is sufficiently large for true 128 bits of security. Warning Keep in mind that the security of your system is only as strong as the weakest link in the chain. For example, a strong cipher alone does not guarantee good security. The keys and the certificates are just as important, as well as the hash functions and keys used by the Certification Authority ( CA ) to sign your keys. 4.13.2. Using Implementations of TLS Red Hat Enterprise Linux 7 is distributed with several full-featured implementations of TLS . In this section, the configuration of OpenSSL and GnuTLS is described. See Section 4.13.3, "Configuring Specific Applications" for instructions on how to configure TLS support in individual applications. 
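To make the Public Key Length recommendation above concrete, the following is a minimal OpenSSL sketch (the file names and subject are placeholders, not values from this guide) for generating a 3072-bit RSA key and a SHA-256-signed certificate signing request:

# Generate a 3072-bit RSA private key
openssl genrsa -out server.key 3072

# Create a certificate signing request signed with SHA-256
openssl req -new -sha256 -key server.key -out server.csr -subj "/CN=www.example.com"

# Inspect the request to confirm the key size and signature algorithm
openssl req -in server.csr -noout -text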
The available TLS implementations offer support for various cipher suites that define all the elements that come together when establishing and using TLS -secured communications. Use the tools included with the different implementations to list and specify cipher suites that provide the best possible security for your use case while considering the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . The resulting cipher suites can then be used to configure the way individual applications negotiate and secure connections. Important Be sure to check your settings following every update or upgrade of the TLS implementation you use or the applications that utilize that implementation. New versions may introduce new cipher suites that you do not want to have enabled and that your current configuration does not disable. 4.13.2.1. Working with Cipher Suites in OpenSSL OpenSSL is a toolkit and a cryptography library that support the SSL and TLS protocols. On Red Hat Enterprise Linux 7, a configuration file is provided at /etc/pki/tls/openssl.cnf . The format of this configuration file is described in config (1) . See also Section 4.7.9, "Configuring OpenSSL" . To get a list of all cipher suites supported by your installation of OpenSSL , use the openssl command with the ciphers subcommand as follows: Pass other parameters (referred to as cipher strings and keywords in OpenSSL documentation) to the ciphers subcommand to narrow the output. Special keywords can be used to only list suites that satisfy a certain condition. For example, to only list suites that are defined as belonging to the HIGH group, use the following command: See the ciphers (1) manual page for a list of available keywords and cipher strings. To obtain a list of cipher suites that satisfy the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command omits all insecure ciphers, gives preference to ephemeral elliptic curve Diffie-Hellman key exchange and ECDSA ciphers, and omits RSA key exchange (thus ensuring perfect forward secrecy ). Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients. 4.13.2.2. Working with Cipher Suites in GnuTLS GnuTLS is a communications library that implements the SSL and TLS protocols and related technologies. Note The GnuTLS installation on Red Hat Enterprise Linux 7 offers optimal default configuration values that provide sufficient security for the majority of use cases. Unless you need to satisfy special security requirements, it is recommended to use the supplied defaults. Use the gnutls-cli command with the -l (or --list ) option to list all supported cipher suites: To narrow the list of cipher suites displayed by the -l option, pass one or more parameters (referred to as priority strings and keywords in GnuTLS documentation) to the --priority option. See the GnuTLS documentation at http://www.gnutls.org/manual/gnutls.html#Priority-Strings for a list of all available priority strings. 
For example, issue the following command to get a list of cipher suites that offer at least 128 bits of security: To obtain a list of cipher suites that satisfy the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command limits the output to ciphers with at least 128 bits of security while giving preference to the stronger ones. It also forbids RSA key exchange and DSS authentication. Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients. 4.13.3. Configuring Specific Applications Different applications provide their own configuration mechanisms for TLS . This section describes the TLS -related configuration files employed by the most commonly used server applications and offers examples of typical configurations. Regardless of the configuration you choose to use, always make sure to mandate that your server application enforces server-side cipher order , so that the cipher suite to be used is determined by the order you configure. 4.13.3.1. Configuring the Apache HTTP Server The Apache HTTP Server can use both OpenSSL and NSS libraries for its TLS needs. Depending on your choice of the TLS library, you need to install either the mod_ssl or the mod_nss module (provided by eponymous packages). For example, to install the package that provides the OpenSSL mod_ssl module, issue the following command as root: The mod_ssl package installs the /etc/httpd/conf.d/ssl.conf configuration file, which can be used to modify the TLS -related settings of the Apache HTTP Server . Similarly, the mod_nss package installs the /etc/httpd/conf.d/nss.conf configuration file. Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server , including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file are described in detail in /usr/share/httpd/manual/mod/mod_ssl.html . Examples of various settings are in /usr/share/httpd/manual/ssl/ssl_howto.html . When modifying the settings in the /etc/httpd/conf.d/ssl.conf configuration file, be sure to consider the following three directives at the minimum: SSLProtocol Use this directive to specify the version of TLS (or SSL ) you want to allow. SSLCipherSuite Use this directive to specify your preferred cipher suite or disable the ones you want to disallow. SSLHonorCipherOrder Uncomment and set this directive to on to ensure that the connecting clients adhere to the order of ciphers you specified. For example: Note that the above configuration is the bare minimum, and it can be hardened significantly by following the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . To configure and use the mod_nss module, modify the /etc/httpd/conf.d/nss.conf configuration file. The mod_nss module is derived from mod_ssl , and as such it shares many features with it, not least the structure of the configuration file, and the directives that are available. Note that the mod_nss directives have a prefix of NSS instead of SSL . See https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html for an overview of information about mod_nss , including a list of mod_ssl configuration directives that are not applicable to mod_nss . 4.13.3.2. 
Configuring the Dovecot Mail Server To configure your installation of the Dovecot mail server to use TLS , modify the /etc/dovecot/conf.d/10-ssl.conf configuration file. You can find an explanation of some of the basic configuration directives available in that file in /usr/share/doc/dovecot-2.2.10/wiki/SSL.DovecotConfiguration.txt (this help file is installed along with the standard installation of Dovecot ). When modifying the settings in the /etc/dovecot/conf.d/10-ssl.conf configuration file, be sure to consider the following three directives at the minimum: ssl_protocols Use this directive to specify the version of TLS (or SSL ) you want to allow. ssl_cipher_list Use this directive to specify your preferred cipher suites or disable the ones you want to disallow. ssl_prefer_server_ciphers Uncomment and set this directive to yes to ensure that the connecting clients adhere to the order of ciphers you specified. For example: Note that the above configuration is the bare minimum, and it can be hardened significantly by following the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . 4.13.4. Additional Information For more information about TLS configuration and related topics, see the resources listed below. Installed Documentation config (1) - Describes the format of the /etc/ssl/openssl.conf configuration file. ciphers (1) - Includes a list of available OpenSSL keywords and cipher strings. /usr/share/httpd/manual/mod/mod_ssl.html - Contains detailed descriptions of the directives available in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/httpd/manual/ssl/ssl_howto.html - Contains practical examples of real-world settings in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/doc/dovecot-2.2.10/wiki/SSL.DovecotConfiguration.txt - Explains some of the basic configuration directives available in the /etc/dovecot/conf.d/10-ssl.conf configuration file used by the Dovecot mail server. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . http://tools.ietf.org/html/draft-ietf-uta-tls-bcp-00 - Recommendations for secure use of TLS and DTLS . See Also Section A.2.4, "SSL/TLS" provides a concise description of the SSL and TLS protocols. Section 4.7, "Using OpenSSL" describes, among other things, how to use OpenSSL to create and manage keys, generate certificates, and encrypt and decrypt files. | [
"~]USD openssl ciphers -v 'ALL:COMPLEMENTOFALL'",
"~]USD openssl ciphers -v 'HIGH'",
"~]USD openssl ciphers -v 'kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES' | column -t ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1",
"~]USD gnutls-cli -l",
"~]USD gnutls-cli --priority SECURE128 -l",
"~]USD gnutls-cli --priority SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC -l Cipher suites for SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2 TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2 TLS_ECDHE_ECDSA_AES_256_CBC_SHA1 0xc0, 0x0a SSL3.0 TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2 TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2 TLS_ECDHE_ECDSA_AES_128_CBC_SHA1 0xc0, 0x09 SSL3.0 TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2 TLS_ECDHE_RSA_AES_256_CBC_SHA1 0xc0, 0x14 SSL3.0 TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2 TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2 TLS_ECDHE_RSA_AES_128_CBC_SHA1 0xc0, 0x13 SSL3.0 TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2 TLS_DHE_RSA_AES_256_CBC_SHA1 0x00, 0x39 SSL3.0 TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2 TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2 TLS_DHE_RSA_AES_128_CBC_SHA1 0x00, 0x33 SSL3.0 Certificate types: CTYPE-X.509 Protocols: VERS-TLS1.2 Compression: COMP-NULL Elliptic curves: CURVE-SECP384R1, CURVE-SECP521R1, CURVE-SECP256R1 PK-signatures: SIGN-RSA-SHA384, SIGN-ECDSA-SHA384, SIGN-RSA-SHA512, SIGN-ECDSA-SHA512, SIGN-RSA-SHA256, SIGN-DSA-SHA256, SIGN-ECDSA-SHA256",
"~]# yum install mod_ssl",
"SSLProtocol all -SSLv2 -SSLv3 SSLCipherSuite HIGH:!aNULL:!MD5 SSLHonorCipherOrder on",
"ssl_protocols = !SSLv2 !SSLv3 ssl_cipher_list = HIGH:!aNULL:!MD5 ssl_prefer_server_ciphers = yes"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Hardening_TLS_Configuration |
Chapter 7. Troubleshooting | Chapter 7. Troubleshooting 7.1. Verifying node health 7.1.1. Reviewing node status, resource usage, and configuration Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the name, status, and role for all nodes in the cluster: USD oc get nodes Summarize CPU and memory usage for each node within the cluster: USD oc adm top nodes Summarize CPU and memory usage for a specific node: USD oc adm top node my-node 7.2. Troubleshooting Operator issues Operators are a method of packaging, deploying, and managing an OpenShift Dedicated application. They act like an extension of the software vendor's engineering team, watching over an OpenShift Dedicated environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time. OpenShift Dedicated 4 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO). As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Dedicated web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM). If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis. 7.2.1. Operator subscription condition types Subscriptions can report the following condition types: Table 7.1. Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing. InstallPlanPending An install plan for a subscription is pending installation. InstallPlanFailed An install plan for a subscription has failed. ResolutionFailed The dependency resolution for a subscription has failed. Note Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Catalog health requirements 7.2.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. 
In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription # ... Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy # ... Note Default OpenShift Dedicated cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 7.2.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource # ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace # ... In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. 
Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity 7.2.4. Querying Operator pod status You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure List Operators running in the cluster. The output includes Operator version, availability, and up-time information: USD oc get clusteroperators List Operator pods running in the Operator's namespace, plus pod status, restarts, and age: USD oc get pod -n <operator_namespace> Output a detailed Operator pod summary: USD oc describe pod <operator_pod_name> -n <operator_namespace> 7.2.5. Gathering Operator logs If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane or control plane machines. Procedure List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age: USD oc get pods -n <operator_namespace> Review logs for an Operator pod: USD oc logs pod/<pod_name> -n <operator_namespace> If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container: USD oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace> If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods For any Operator pods not showing a Ready status, inspect the pod's status in detail. 
Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id> List containers related to an Operator pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id> For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Dedicated 4 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Dedicated API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3. Investigating pod issues OpenShift Dedicated leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Dedicated 4. After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed. The first thing to check when pod issues arise is the pod's status. If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running Pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration. 7.3.1. Understanding pod error states Pod failures return explicit error states that can be observed in the status field in the output of oc get pods . Pod error states cover image, container, and container network related failures. The following table provides a list of pod error states along with their descriptions. Table 7.2. Pod error states Pod error state Description ErrImagePull Generic image retrieval error. ErrImagePullBackOff Image retrieval failed and is backed off. ErrInvalidImageName The specified image name was invalid. ErrImageInspect Image inspection did not succeed. ErrImageNeverPull PullPolicy is set to NeverPullImage and the target image is not present locally on the host. ErrRegistryUnavailable When attempting to retrieve an image from a registry, an HTTP error was encountered. ErrContainerNotFound The specified container is either not present or not managed by the kubelet, within the declared pod. ErrRunInitContainer Container initialization failed. ErrRunContainer None of the pod's containers started successfully. 
ErrKillContainer None of the pod's containers were killed successfully. ErrCrashLoopBackOff A container has terminated. The kubelet will not attempt to restart it. ErrVerifyNonRoot A container or image attempted to run with root privileges. ErrCreatePodSandbox Pod sandbox creation did not succeed. ErrConfigPodSandbox Pod sandbox configuration was not obtained. ErrKillPodSandbox A pod sandbox did not stop successfully. ErrSetupNetwork Network initialization failed. ErrTeardownNetwork Network termination failed. 7.3.2. Reviewing pod status You can query pod status and error states. You can also query a pod's associated deployment configuration and review base image availability. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). skopeo is installed. Procedure Switch into a project: USD oc project <project_name> List pods running within the namespace, as well as pod status, error states, restarts, and age: USD oc get pods Determine whether the namespace is managed by a deployment configuration: USD oc status If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference. Inspect the base image referenced in the preceding command's output: USD skopeo inspect docker://<image_reference> If the base image reference is not correct, update the reference in the deployment configuration: USD oc edit deployment/my-deployment When deployment configuration changes on exit, the configuration will automatically redeploy. Watch pod status as the deployment progresses, to determine whether the issue has been resolved: USD oc get pods -w Review events within the namespace for diagnostic information relating to pod failures: USD oc get events 7.3.3. Inspecting pod and container logs You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Query logs for a specific pod: USD oc logs <pod_name> Query logs for a specific container within a pod: USD oc logs <pod_name> -c <container_name> Logs retrieved using the preceding oc logs commands are composed of messages sent to stdout within pods or containers. Inspect logs contained in /var/log/ within a pod. List log files and subdirectories contained in /var/log within a pod: USD oc exec <pod_name> -- ls -alh /var/log Example output total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 
1 root utmp 0 Jul 10 10:31 wtmp Query a specific log file contained in /var/log within a pod: USD oc exec <pod_name> cat /var/log/<path_to_log> Example output 2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO List log files and subdirectories contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> ls /var/log Query a specific log file contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log> 7.3.4. Accessing running pods You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option: USD oc project <namespace> Start a remote shell into a pod: USD oc rsh <pod_name> 1 1 If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified. Start a remote shell into a specific container within a pod: USD oc rsh -c <container_name> pod/<pod_name> Create a port forwarding session to a port on a pod: USD oc port-forward <pod_name> <host_port>:<pod_port> 1 1 Enter Ctrl+C to cancel the port forwarding session. 7.3.5. Starting debug pods with root access You can start a debug pod with root access, based on a problematic pod's deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod with root access, based on a deployment. Obtain a project's deployment name: USD oc get deployment -n <project_name> Start a debug pod with root privileges, based on the deployment: USD oc debug deployment/my-deployment --as-root -n <project_name> Start a debug pod with root access, based on a deployment configuration. Obtain a project's deployment configuration name: USD oc get deploymentconfigs -n <project_name> Start a debug pod with root privileges, based on the deployment configuration: USD oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name> Note You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell. 7.3.6. Copying files to and from pods and containers You can copy files to and from a pod to test configuration changes or gather diagnostic information. 
Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Copy a file to a pod: USD oc cp <local_path> <pod_name>:/<path> -c <container_name> 1 1 The first container in a pod is selected if the -c option is not specified. Copy a file from a pod: USD oc cp <pod_name>:/<path> -c <container_name> <local_path> 1 1 The first container in a pod is selected if the -c option is not specified. Note For oc cp to function, the tar binary must be available within the container. 7.4. Troubleshooting the Source-to-Image process 7.4.1. Strategies for Source-to-Image troubleshooting Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source. To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages: During the build configuration stage , a build pod is used to create an application container image from a base image and application source code. During the deployment configuration stage , a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds. After the deployment pod has started the application pods , application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod. When troubleshooting S2I issues, follow this strategy: Monitor build, deployment, and application pod status Determine the stage of the S2I process where the problem occurred Review logs corresponding to the failed stage 7.4.2. Gathering Source-to-Image diagnostic data The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Watch the pod status throughout the S2I process to determine at which stage a failure occurs: USD oc get pods -w 1 1 Use -w to monitor pods for changes until you quit the command using Ctrl+C . Review a failed pod's logs for errors. If the build pod fails , review the build pod's logs: USD oc logs -f pod/<application_name>-<build_number>-build Note Alternatively, you can review the build configuration's logs using oc logs -f bc/<application_name> . The build configuration's logs include the logs from the build pod. If the deployment pod fails , review the deployment pod's logs: USD oc logs -f pod/<application_name>-<build_number>-deploy Note Alternatively, you can review the deployment configuration's logs using oc logs -f dc/<application_name> . 
This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy . If an application pod fails, or if an application is not behaving as expected within a running application pod , review the application pod's logs: USD oc logs -f pod/<application_name>-<build_number>-<random_string> 7.4.3. Gathering application diagnostic data to investigate application failures Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies: Review events relating to the application pods. Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework. Test application functionality interactively and run diagnostic tools in an application container. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg : USD oc describe pod/my-app-1-akdlg Review logs from an application pod: USD oc logs -f pod/my-app-1-akdlg Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows: USD oc exec my-app-1-akdlg -- cat /var/log/my-application.log If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's DeploymentConfig object. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation: USD oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log Note You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command> . Test application functionality interactively and run diagnostic tools, in an application container with an interactive shell. Start an interactive shell on the application container: USD oc exec -it my-app-1-akdlg /bin/bash Test application functionality interactively from within the shell. For example, you can run the container's entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process. Run diagnostic binaries available within the container. Note Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root . Then, you can run diagnostic binaries as root from within the debug pod. 7.5. Troubleshooting storage issues 7.5.1. 
Resolving multi-attach errors When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node. However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume. A multi-attach error is reported: Example output Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4 Procedure To resolve the multi-attach issue, use one of the following solutions: Enable multiple attachments by using RWX volumes. For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors. Recover or delete the failed node when using an RWO volume. For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes. If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. USD oc delete pod <old_pod> --force=true --grace-period=0 This command deletes the volumes stuck on shutdown or crashed nodes after six minutes. 7.6. Investigating monitoring issues OpenShift Dedicated includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Dedicated 4, cluster administrators can optionally enable monitoring for user-defined projects. Use these procedures if the following issues occur: Your own metrics are unavailable. Prometheus is consuming a lot of disk space. The KubePersistentVolumeFillingUp alert is firing for Prometheus. 7.6.1. Investigating why user-defined project metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined projects. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the preceding step. The following example queries the prometheus-example-monitor service monitor in the ns1 project: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. 
Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Dedicated web console UI. Log in to the OpenShift Dedicated web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed. Confirm that the debug log level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator log level value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Creating a user-defined workload monitoring config map See Specifying how a service is monitored for details on how to create a service monitor or pod monitor See Getting detailed information about a metrics target 7.6.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs.
The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges. Check the number of scrape samples that are being collected. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption: By running the following query, you can identify the ten jobs that have the highest number of scrape samples: topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling))) By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour: topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h]))) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts: If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Dedicated project , create a Red Hat support case on the Red Hat Customer Portal . Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a dedicated-admin : Get the Prometheus API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host}) Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Query the TSDB status for Prometheus by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/status/tsdb" Example output "status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ... 
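If you prefer to list the highest-cardinality metrics from the command line rather than reading the raw JSON response, you can post-process the same TSDB status output. The following is a minimal sketch that assumes the HOST and TOKEN variables set in the previous steps and that the jq utility is installed on your workstation (jq is not part of the oc client):

curl -s -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/status/tsdb" | jq -r '.data.seriesCountByMetricName[] | "\(.value)  \(.name)"'

Metrics that appear at the top of this list with unexpectedly high series counts are the first candidates to review for unbound label values.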
Additional resources Setting scrape and evaluation intervals and enforced limits for user-defined projects 7.7. Diagnosing OpenShift CLI ( oc ) issues 7.7.1. Understanding OpenShift CLI ( oc ) log levels With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Dedicated projects from a terminal. If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command's underlying operation, which in turn might provide insight into the nature of a failure. oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions. Table 7.3. OpenShift CLI (oc) log levels Log level Description 1 to 5 No additional logging to stderr. 6 Log API requests to stderr. 7 Log API requests and headers to stderr. 8 Log API requests, headers, and body, plus API response headers and body to stderr. 9 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. 10 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr, in verbose detail. 7.7.2. Specifying OpenShift CLI ( oc ) log levels You can investigate OpenShift CLI ( oc ) issues by increasing the command's log level. The OpenShift Dedicated user's current session token is typically included in logged curl requests where required. You can also obtain the current user's session token manually, for use when testing aspects of an oc command's underlying process step-by-step. Prerequisites Install the OpenShift CLI ( oc ). Procedure Specify the oc log level when running an oc command: USD oc <command> --loglevel <log_level> where: <command> Specifies the command you are running. <log_level> Specifies the log level to apply to the command. To obtain the current user's session token, run the following command: USD oc whoami -t Example output sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6... 7.8. Red Hat managed resources 7.8.1. Overview The following covers all OpenShift Dedicated resources that are managed or protected by the Service Reliability Engineering Platform (SRE-P) Team. Customers should not try to change these resources because doing so can lead to cluster instability. 7.8.2. Hive managed resources The following list displays the OpenShift Dedicated resources managed by OpenShift Hive, the centralized fleet configuration management system. These resources are in addition to the OpenShift Container Platform resources created during installation. OpenShift Hive continually attempts to maintain consistency across all OpenShift Dedicated clusters. Changes to OpenShift Dedicated resources should be made through OpenShift Cluster Manager so that OpenShift Cluster Manager and Hive are synchronized. Contact [email protected] if OpenShift Cluster Manager does not support modifying the resources in question. Example 7.1. 
List of Hive managed resources Resources: ConfigMap: - namespace: openshift-config name: rosa-brand-logo - namespace: openshift-console name: custom-logo - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-config - namespace: openshift-file-integrity name: fr-aide-conf - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-config - namespace: openshift-monitoring name: cluster-monitoring-config - namespace: openshift-monitoring name: managed-namespaces - namespace: openshift-monitoring name: ocp-namespaces - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter-code - namespace: openshift-monitoring name: sre-dns-latency-exporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-ebs-iops-reporter-code - namespace: openshift-monitoring name: sre-ebs-iops-reporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-stuck-ebs-vols-code - namespace: openshift-monitoring name: sre-stuck-ebs-vols-trusted-ca-bundle - namespace: openshift-security name: osd-audit-policy - namespace: openshift-validation-webhook name: webhook-cert - namespace: openshift name: motd Endpoints: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook Namespace: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-aws-vpce-operator - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-compliance-monkey - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero - name: openshift-monitoring - name: openshift - name: openshift-cluster-version - name: keycloak - name: goalert - name: configure-goalert-operator ReplicationController: - namespace: openshift-monitoring name: sre-ebs-iops-reporter-1 - namespace: openshift-monitoring name: sre-stuck-ebs-vols-1 Secret: - namespace: openshift-authentication name: v4-0-config-user-idp-0-file-data - namespace: openshift-authentication name: v4-0-config-user-template-error - namespace: 
openshift-authentication name: v4-0-config-user-template-login - namespace: openshift-authentication name: v4-0-config-user-template-provider-selection - namespace: openshift-config name: htpasswd-secret - namespace: openshift-config name: osd-oauth-templates-errors - namespace: openshift-config name: osd-oauth-templates-login - namespace: openshift-config name: osd-oauth-templates-providers - namespace: openshift-config name: rosa-oauth-templates-errors - namespace: openshift-config name: rosa-oauth-templates-login - namespace: openshift-config name: rosa-oauth-templates-providers - namespace: openshift-config name: support - namespace: openshift-config name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-ingress name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-kube-apiserver name: user-serving-cert-000 - namespace: openshift-kube-apiserver name: user-serving-cert-001 - namespace: openshift-monitoring name: dms-secret - namespace: openshift-monitoring name: observatorium-credentials - namespace: openshift-monitoring name: pd-secret - namespace: openshift-scanning name: clam-secrets - namespace: openshift-scanning name: logger-secrets - namespace: openshift-security name: splunk-auth ServiceAccount: - namespace: openshift-backplane-managed-scripts name: osd-backplane - namespace: openshift-backplane-srep name: 6804d07fb268b8285b023bcf65392f0e - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-scanning name: logger-sa - namespace: openshift-scanning name: scanner-sa - namespace: openshift-sre-pruning name: sre-pruner-sa - namespace: openshift-suricata name: suricata-sa - namespace: openshift-validation-webhook name: validation-webhook - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-velero name: velero - namespace: openshift-backplane-srep name: UNIQUE_BACKPLANE_SERVICEACCOUNT_ID Service: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook 
name: validation-webhook AddonOperator: - name: addon-operator ValidatingWebhookConfiguration: - name: sre-hiveownership-validation - name: sre-namespace-validation - name: sre-pod-validation - name: sre-prometheusrule-validation - name: sre-regular-user-validation - name: sre-scc-validation - name: sre-techpreviewnoupgrade-validation DaemonSet: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-scanning name: logger - namespace: openshift-scanning name: scanner - namespace: openshift-security name: audit-exporter - namespace: openshift-suricata name: suricata - namespace: openshift-validation-webhook name: validation-webhook DeploymentConfig: - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterRoleBinding: - name: aqua-scanner-binding - name: backplane-cluster-admin - name: backplane-impersonate-cluster-admin - name: bz1980755 - name: configure-alertmanager-operator-prom - name: dedicated-admins-cluster - name: dedicated-admins-registry-cas-cluster - name: logger-clusterrolebinding - name: openshift-backplane-managed-scripts-reader - name: osd-cluster-admin - name: osd-cluster-ready - name: osd-delete-backplane-script-resources - name: osd-delete-ownerrefs-serviceaccounts - name: osd-patch-subscription-source - name: osd-rebalance-infra-nodes - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: splunk-forwarder-operator-clusterrolebinding - name: sre-pod-network-connectivity-check-pruner - name: sre-pruner-buildsdeploys-pruning - name: velero - name: webhook-validation ClusterRole: - name: backplane-cee-readers-cluster - name: backplane-impersonate-cluster-admin - name: backplane-readers-cluster - name: backplane-srep-admins-cluster - name: backplane-srep-admins-project - name: bz1980755 - name: dedicated-admins-aggregate-cluster - name: dedicated-admins-aggregate-project - name: dedicated-admins-cluster - name: dedicated-admins-manage-operators - name: dedicated-admins-project - name: dedicated-admins-registry-cas-cluster - name: dedicated-readers - name: image-scanner - name: logger-clusterrole - name: openshift-backplane-managed-scripts-reader - name: openshift-splunk-forwarder-operator - name: osd-cluster-ready - name: osd-custom-domains-dedicated-admin-cluster - name: osd-delete-backplane-script-resources - name: osd-delete-backplane-serviceaccounts - name: osd-delete-ownerrefs-serviceaccounts - name: osd-get-namespace - name: osd-netnamespaces-dedicated-admin-cluster - name: osd-patch-subscription-source - name: osd-readers-aggregate - name: osd-rebalance-infra-nodes - name: osd-rebalance-infra-nodes-openshift-pod-rebalance - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: sre-allow-read-machine-info - name: sre-pruner-buildsdeploys-cr - name: webhook-validation-cr RoleBinding: - namespace: kube-system name: cloud-ingress-operator-cluster-config-v1-reader - namespace: kube-system name: managed-velero-operator-cluster-config-v1-reader - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-mustgather - namespace: openshift-backplane-managed-scripts name: backplane-srep-mustgather - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-cloud-ingress-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-codeready-workspaces name: 
dedicated-admins-openshift-codeready-workspaces - namespace: openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-custom-domains-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-image-registry name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: admin-dedicated-admins - namespace: openshift-logging name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-logging name: openshift-logging-dedicated-admins - namespace: openshift-logging name: openshift-logging:serviceaccounts:dedicated-admin - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: sre-ebs-iops-reporter-read-machine-info - namespace: openshift-machine-api name: sre-stuck-ebs-vols-read-machine-info - namespace: openshift-managed-node-metadata-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-must-gather-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-network-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ocm-agent-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-operators-redhat name: admin-dedicated-admins - namespace: openshift-operators-redhat name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-operators-redhat name: openshift-operators-redhat-dedicated-admins - namespace: openshift-operators-redhat name: openshift-operators-redhat:serviceaccounts:dedicated-admin - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics name: 
osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-osd-metrics name: prometheus-k8s - namespace: openshift-rbac-permissions name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-route-monitor-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-scanning name: scanner-rolebinding - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-splunk-forwarder-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-suricata name: suricata-rolebinding - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-create - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-edit - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-managed-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-velero name: prometheus-k8s Role: - namespace: kube-system name: cluster-config-v1-reader - namespace: kube-system name: cluster-config-v1-reader-cio - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-pcap-collector - namespace: openshift-backplane-managed-scripts name: backplane-srep-pcap-collector - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: dedicated-admins-openshift-logging - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: 
openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics name: prometheus-k8s - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-scanning name: scanner-role - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-suricata name: suricata-role - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-create-cm - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-manage-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: prometheus-k8s CronJob: - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-sre-pruning name: builds-pruner - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-sre-pruning name: deployments-pruner Job: - namespace: openshift-monitoring name: osd-cluster-ready CredentialsRequest: - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-aws - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-gcp - namespace: openshift-monitoring name: sre-ebs-iops-reporter-aws-credentials - namespace: openshift-monitoring name: sre-stuck-ebs-vols-aws-credentials - namespace: openshift-velero name: managed-velero-operator-iam-credentials-aws - namespace: openshift-velero name: managed-velero-operator-iam-credentials-gcp APIScheme: - namespace: openshift-cloud-ingress-operator name: rh-api PublishingStrategy: - namespace: openshift-cloud-ingress-operator name: publishingstrategy ScanSettingBinding: - namespace: openshift-compliance name: fedramp-high-ocp - namespace: openshift-compliance name: fedramp-high-rhcos ScanSetting: - namespace: openshift-compliance name: osd TailoredProfile: - namespace: openshift-compliance name: rhcos4-high-rosa OAuth: - name: cluster EndpointSlice: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics-rhtwg - namespace: openshift-monitoring name: sre-dns-latency-exporter-4cw9r - namespace: openshift-monitoring name: sre-ebs-iops-reporter-6tx5g - namespace: openshift-monitoring name: sre-stuck-ebs-vols-gmdhs - namespace: openshift-scanning name: loggerservice-zprbq - namespace: openshift-security name: audit-exporter-nqfdk - namespace: openshift-validation-webhook name: validation-webhook-97b8t FileIntegrity: - namespace: openshift-file-integrity name: osd-fileintegrity MachineHealthCheck: - namespace: openshift-machine-api name: srep-infra-healthcheck - namespace: openshift-machine-api name: srep-metal-worker-healthcheck - namespace: openshift-machine-api name: srep-worker-healthcheck MachineSet: - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-infra-us-east-1a - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-worker-us-east-1a ContainerRuntimeConfig: - name: 
custom-crio KubeletConfig: - name: custom-kubelet MachineConfig: - name: 00-master-chrony - name: 00-worker-chrony SubjectPermission: - namespace: openshift-rbac-permissions name: backplane-cee - namespace: openshift-rbac-permissions name: backplane-csa - namespace: openshift-rbac-permissions name: backplane-cse - namespace: openshift-rbac-permissions name: backplane-csm - namespace: openshift-rbac-permissions name: backplane-mobb - namespace: openshift-rbac-permissions name: backplane-srep - namespace: openshift-rbac-permissions name: backplane-tam - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins - namespace: openshift-rbac-permissions name: dedicated-admins-alert-routing-edit - namespace: openshift-rbac-permissions name: dedicated-admins-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins-customer-monitoring - namespace: openshift-rbac-permissions name: osd-delete-backplane-serviceaccounts VeleroInstall: - namespace: openshift-velero name: cluster PrometheusRule: - namespace: openshift-monitoring name: rhmi-sre-cluster-admins - namespace: openshift-monitoring name: rhoam-sre-cluster-admins - namespace: openshift-monitoring name: sre-alertmanager-silences-active - namespace: openshift-monitoring name: sre-alerts-stuck-builds - namespace: openshift-monitoring name: sre-alerts-stuck-volumes - namespace: openshift-monitoring name: sre-cloud-ingress-operator-offline-alerts - namespace: openshift-monitoring name: sre-avo-pendingacceptance - namespace: openshift-monitoring name: sre-configure-alertmanager-operator-offline-alerts - namespace: openshift-monitoring name: sre-control-plane-resizing-alerts - namespace: openshift-monitoring name: sre-dns-alerts - namespace: openshift-monitoring name: sre-ebs-iops-burstbalance - namespace: openshift-monitoring name: sre-elasticsearch-jobs - namespace: openshift-monitoring name: sre-elasticsearch-managed-notification-alerts - namespace: openshift-monitoring name: sre-excessive-memory - namespace: openshift-monitoring name: sre-fr-alerts-low-disk-space - namespace: openshift-monitoring name: sre-haproxy-reload-fail - namespace: openshift-monitoring name: sre-internal-slo-recording-rules - namespace: openshift-monitoring name: sre-kubequotaexceeded - namespace: openshift-monitoring name: sre-leader-election-master-status-alerts - namespace: openshift-monitoring name: sre-managed-kube-apiserver-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-controller-manager-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-scheduler-missing-on-node - namespace: openshift-monitoring name: sre-managed-node-metadata-operator-alerts - namespace: openshift-monitoring name: sre-managed-notification-alerts - namespace: openshift-monitoring name: sre-managed-upgrade-operator-alerts - namespace: openshift-monitoring name: sre-managed-velero-operator-alerts - namespace: openshift-monitoring name: sre-node-unschedulable - namespace: openshift-monitoring name: sre-oauth-server - namespace: openshift-monitoring name: sre-pending-csr-alert - namespace: openshift-monitoring name: sre-proxy-managed-notification-alerts - namespace: openshift-monitoring name: sre-pruning - namespace: openshift-monitoring name: sre-pv - namespace: openshift-monitoring name: sre-router-health - namespace: openshift-monitoring name: 
sre-runaway-sdn-preventing-container-creation - namespace: openshift-monitoring name: sre-slo-recording-rules - namespace: openshift-monitoring name: sre-telemeter-client - namespace: openshift-monitoring name: sre-telemetry-managed-labels-recording-rules - namespace: openshift-monitoring name: sre-upgrade-send-managed-notification-alerts - namespace: openshift-monitoring name: sre-uptime-sla ServiceMonitor: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterUrlMonitor: - namespace: openshift-route-monitor-operator name: api RouteMonitor: - namespace: openshift-route-monitor-operator name: console NetworkPolicy: - namespace: openshift-deployment-validation-operator name: allow-from-openshift-insights - namespace: openshift-deployment-validation-operator name: allow-from-openshift-olm ManagedNotification: - namespace: openshift-ocm-agent-operator name: sre-elasticsearch-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-proxy-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-upgrade-managed-notifications OcmAgent: - namespace: openshift-ocm-agent-operator name: ocmagent - namespace: openshift-security name: audit-exporter Console: - name: cluster CatalogSource: - namespace: openshift-addon-operator name: addon-operator-catalog - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-registry - namespace: openshift-compliance name: compliance-operator-registry - namespace: openshift-container-security name: container-security-operator-registry - namespace: openshift-custom-domains-operator name: custom-domains-operator-registry - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-catalog - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator-registry - namespace: openshift-file-integrity name: file-integrity-operator-registry - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-catalog - namespace: openshift-monitoring name: configure-alertmanager-operator-registry - namespace: openshift-must-gather-operator name: must-gather-operator-registry - namespace: openshift-observability-operator name: observability-operator-catalog - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-registry - namespace: openshift-osd-metrics name: osd-metrics-exporter-registry - namespace: openshift-rbac-permissions name: rbac-permissions-operator-registry - namespace: openshift-route-monitor-operator name: route-monitor-operator-registry - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-catalog - namespace: openshift-velero name: managed-velero-operator-registry OperatorGroup: - namespace: openshift-addon-operator name: addon-operator-og - namespace: openshift-aqua name: openshift-aqua - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-codeready-workspaces name: openshift-codeready-workspaces - namespace: openshift-compliance name: compliance-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-customer-monitoring name: openshift-customer-monitoring - namespace: openshift-deployment-validation-operator name: 
deployment-validation-operator-og - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-logging name: openshift-logging - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-og - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator-og - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-og - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-og - namespace: openshift-velero name: managed-velero-operator Subscription: - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-compliance name: compliance-operator-sub - namespace: openshift-container-security name: container-security-operator-sub - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-deployment-validation-operator name: deployment-validation-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-file-integrity name: file-integrity-operator-sub - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: openshift-splunk-forwarder-operator - namespace: openshift-velero name: managed-velero-operator PackageManifest: - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-deployment-validation-operator name: managed-upgrade-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-custom-domains-operator name: managed-node-metadata-operator - namespace: openshift-route-monitor-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: 
openshift-must-gather-operator name: deployment-validation-operator - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-compliance name: compliance-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator Status: - {} Project: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero ClusterResourceQuota: - name: loadbalancer-quota - name: persistent-volume-quota SecurityContextConstraints: - name: osd-scanning-scc - name: osd-suricata-scc - name: pcap-dedicated-admins - name: splunkforwarder SplunkForwarder: - namespace: openshift-security name: splunkforwarder Group: - name: cluster-admins - name: dedicated-admins User: - name: backplane-cluster-admin Backup: - namespace: openshift-velero name: daily-full-backup-20221123112305 - namespace: openshift-velero name: daily-full-backup-20221125042537 - namespace: openshift-velero name: daily-full-backup-20221126010038 - namespace: openshift-velero name: daily-full-backup-20221127010039 - namespace: openshift-velero name: daily-full-backup-20221128010040 - namespace: openshift-velero name: daily-full-backup-20221129050847 - namespace: openshift-velero name: hourly-object-backup-20221128051740 - namespace: openshift-velero name: hourly-object-backup-20221128061740 - namespace: openshift-velero name: hourly-object-backup-20221128071740 - namespace: openshift-velero name: hourly-object-backup-20221128081740 - namespace: openshift-velero name: hourly-object-backup-20221128091740 - namespace: openshift-velero name: hourly-object-backup-20221129050852 - namespace: openshift-velero name: hourly-object-backup-20221129051747 - namespace: openshift-velero name: weekly-full-backup-20221116184315 - namespace: openshift-velero name: weekly-full-backup-20221121033854 - namespace: openshift-velero name: weekly-full-backup-20221128020040 Schedule: - namespace: openshift-velero name: daily-full-backup - namespace: openshift-velero name: hourly-object-backup - namespace: openshift-velero name: weekly-full-backup 7.8.3. OpenShift Dedicated core namespaces OpenShift Dedicated core namespaces are installed by default during cluster installation. Example 7.2. 
List of core namespaces apiVersion: v1 kind: ConfigMap metadata: name: ocp-namespaces namespace: openshift-monitoring data: managed_namespaces.yaml: | Resources: Namespace: - name: kube-system - name: openshift-apiserver - name: openshift-apiserver-operator - name: openshift-authentication - name: openshift-authentication-operator - name: openshift-cloud-controller-manager - name: openshift-cloud-controller-manager-operator - name: openshift-cloud-credential-operator - name: openshift-cloud-network-config-controller - name: openshift-cluster-api - name: openshift-cluster-csi-drivers - name: openshift-cluster-machine-approver - name: openshift-cluster-node-tuning-operator - name: openshift-cluster-samples-operator - name: openshift-cluster-storage-operator - name: openshift-config - name: openshift-config-managed - name: openshift-config-operator - name: openshift-console - name: openshift-console-operator - name: openshift-console-user-settings - name: openshift-controller-manager - name: openshift-controller-manager-operator - name: openshift-dns - name: openshift-dns-operator - name: openshift-etcd - name: openshift-etcd-operator - name: openshift-host-network - name: openshift-image-registry - name: openshift-ingress - name: openshift-ingress-canary - name: openshift-ingress-operator - name: openshift-insights - name: openshift-kni-infra - name: openshift-kube-apiserver - name: openshift-kube-apiserver-operator - name: openshift-kube-controller-manager - name: openshift-kube-controller-manager-operator - name: openshift-kube-scheduler - name: openshift-kube-scheduler-operator - name: openshift-kube-storage-version-migrator - name: openshift-kube-storage-version-migrator-operator - name: openshift-machine-api - name: openshift-machine-config-operator - name: openshift-marketplace - name: openshift-monitoring - name: openshift-multus - name: openshift-network-diagnostics - name: openshift-network-operator - name: openshift-nutanix-infra - name: openshift-oauth-apiserver - name: openshift-openstack-infra - name: openshift-operator-lifecycle-manager - name: openshift-operators - name: openshift-ovirt-infra - name: openshift-sdn - name: openshift-ovn-kubernetes - name: openshift-platform-operators - name: openshift-route-controller-manager - name: openshift-service-ca - name: openshift-service-ca-operator - name: openshift-user-workload-monitoring - name: openshift-vsphere-infra 7.8.4. OpenShift Dedicated add-on namespaces OpenShift Dedicated add-ons are services available for installation after cluster installation. These additional services include Red Hat OpenShift Dev Spaces, Red Hat OpenShift API Management, and Cluster Logging Operator. Any changes to resources within the following namespaces can be overridden by the add-on during upgrades, which can lead to unsupported configurations for the add-on functionality. Example 7.3. 
List of add-on managed namespaces addon-namespaces: ocs-converged-dev: openshift-storage managed-api-service-internal: redhat-rhoami-operator codeready-workspaces-operator: codeready-workspaces-operator managed-odh: redhat-ods-operator codeready-workspaces-operator-qe: codeready-workspaces-operator-qe integreatly-operator: redhat-rhmi-operator nvidia-gpu-addon: redhat-nvidia-gpu-addon integreatly-operator-internal: redhat-rhmi-operator rhoams: redhat-rhoam-operator ocs-converged: openshift-storage addon-operator: redhat-addon-operator prow-operator: prow cluster-logging-operator: openshift-logging advanced-cluster-management: redhat-open-cluster-management cert-manager-operator: redhat-cert-manager-operator dba-operator: addon-dba-operator reference-addon: redhat-reference-addon ocm-addon-test-operator: redhat-ocm-addon-test-operator 7.8.5. OpenShift Dedicated validating webhooks OpenShift Dedicated validating webhooks are a set of dynamic admission controls maintained by the OpenShift SRE team. These HTTP callbacks, also known as webhooks, are called for various types of requests to ensure cluster stability. The following list describes the various webhooks with rules containing the registered operations and resources that are controlled. Any attempt to circumvent these validating webhooks could affect the stability and supportability of the cluster. Example 7.4. List of validating webhooks [ { "webhookName": "clusterlogging-validation", "rules": [ { "operations": [ "CREATE", "UPDATE" ], "apiGroups": [ "logging.openshift.io" ], "apiVersions": [ "v1" ], "resources": [ "clusterloggings" ], "scope": "Namespaced" } ], "documentString": "Managed OpenShift Customers may set log retention outside the allowed range of 0-7 days" }, { "webhookName": "clusterrolebindings-validation", "rules": [ { "operations": [ "DELETE" ], "apiGroups": [ "rbac.authorization.k8s.io" ], "apiVersions": [ "v1" ], "resources": [ "clusterrolebindings" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift Customers may not delete the cluster role bindings under the managed namespaces: (^openshift-.*|kube-system)" }, { "webhookName": "customresourcedefinitions-validation", "rules": [ { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "apiextensions.k8s.io" ], "apiVersions": [ "*" ], "resources": [ "customresourcedefinitions" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift Customers may not change CustomResourceDefinitions managed by Red Hat." }, { "webhookName": "hiveownership-validation", "rules": [ { "operations": [ "UPDATE", "DELETE" ], "apiGroups": [ "quota.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "clusterresourcequotas" ], "scope": "Cluster" } ], "webhookObjectSelector": { "matchLabels": { "hive.openshift.io/managed": "true" } }, "documentString": "Managed OpenShift customers may not edit certain managed resources. A managed resource has a \"hive.openshift.io/managed\": \"true\" label." 
}, { "webhookName": "imagecontentpolicies-validation", "rules": [ { "operations": [ "CREATE", "UPDATE" ], "apiGroups": [ "config.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "imagedigestmirrorsets", "imagetagmirrorsets" ], "scope": "Cluster" }, { "operations": [ "CREATE", "UPDATE" ], "apiGroups": [ "operator.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "imagecontentsourcepolicies" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift customers may not create ImageContentSourcePolicy, ImageDigestMirrorSet, or ImageTagMirrorSet resources that configure mirrors that would conflict with system registries (e.g. quay.io, registry.redhat.io, registry.access.redhat.com, etc). For more details, see https://docs.openshift.com/" }, { "webhookName": "ingress-config-validation", "rules": [ { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "config.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "ingresses" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift customers may not modify ingress config resources because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring." }, { "webhookName": "ingresscontroller-validation", "rules": [ { "operations": [ "CREATE", "UPDATE" ], "apiGroups": [ "operator.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "ingresscontroller", "ingresscontrollers" ], "scope": "Namespaced" } ], "documentString": "Managed OpenShift Customer may create IngressControllers without necessary taints. This can cause those workloads to be provisioned on infra or master nodes." }, { "webhookName": "namespace-validation", "rules": [ { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "" ], "apiVersions": [ "*" ], "resources": [ "namespaces" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift Customers may not modify namespaces specified in the [openshift-monitoring/managed-namespaces openshift-monitoring/ocp-namespaces] ConfigMaps because customer workloads should be placed in customer-created namespaces. Customers may not create namespaces identified by this regular expression (^comUSD|^ioUSD|^inUSD) because it could interfere with critical DNS resolution. Additionally, customers may not set or change the values of these Namespace labels [managed.openshift.io/storage-pv-quota-exempt managed.openshift.io/service-lb-quota-exempt]." }, { "webhookName": "networkpolicies-validation", "rules": [ { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "networking.k8s.io" ], "apiVersions": [ "*" ], "resources": [ "networkpolicies" ], "scope": "Namespaced" } ], "documentString": "Managed OpenShift Customers may not create NetworkPolicies in namespaces managed by Red Hat." }, { "webhookName": "node-validation-osd", "rules": [ { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "" ], "apiVersions": [ "*" ], "resources": [ "nodes", "nodes/*" ], "scope": "*" } ], "documentString": "Managed OpenShift customers may not alter Node objects." }, { "webhookName": "pod-validation", "rules": [ { "operations": [ "*" ], "apiGroups": [ "v1" ], "apiVersions": [ "*" ], "resources": [ "pods" ], "scope": "Namespaced" } ], "documentString": "Managed OpenShift Customers may use tolerations on Pods that could cause those Pods to be scheduled on infra or master nodes." 
}, { "webhookName": "prometheusrule-validation", "rules": [ { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "monitoring.coreos.com" ], "apiVersions": [ "*" ], "resources": [ "prometheusrules" ], "scope": "Namespaced" } ], "documentString": "Managed OpenShift Customers may not create PrometheusRule in namespaces managed by Red Hat." }, { "webhookName": "regular-user-validation", "rules": [ { "operations": [ "*" ], "apiGroups": [ "cloudcredential.openshift.io", "machine.openshift.io", "admissionregistration.k8s.io", "addons.managed.openshift.io", "cloudingress.managed.openshift.io", "managed.openshift.io", "ocmagent.managed.openshift.io", "splunkforwarder.managed.openshift.io", "upgrade.managed.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "*/*" ], "scope": "*" }, { "operations": [ "*" ], "apiGroups": [ "autoscaling.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "clusterautoscalers", "machineautoscalers" ], "scope": "*" }, { "operations": [ "*" ], "apiGroups": [ "config.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "clusterversions", "clusterversions/status", "schedulers", "apiservers", "proxies" ], "scope": "*" }, { "operations": [ "CREATE", "UPDATE", "DELETE" ], "apiGroups": [ "" ], "apiVersions": [ "*" ], "resources": [ "configmaps" ], "scope": "*" }, { "operations": [ "*" ], "apiGroups": [ "machineconfiguration.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "machineconfigs", "machineconfigpools" ], "scope": "*" }, { "operations": [ "*" ], "apiGroups": [ "operator.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "kubeapiservers", "openshiftapiservers" ], "scope": "*" }, { "operations": [ "*" ], "apiGroups": [ "managed.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "subjectpermissions", "subjectpermissions/*" ], "scope": "*" }, { "operations": [ "*" ], "apiGroups": [ "network.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "netnamespaces", "netnamespaces/*" ], "scope": "*" } ], "documentString": "Managed OpenShift customers may not manage any objects in the following APIGroups [autoscaling.openshift.io network.openshift.io machine.openshift.io admissionregistration.k8s.io addons.managed.openshift.io cloudingress.managed.openshift.io splunkforwarder.managed.openshift.io upgrade.managed.openshift.io managed.openshift.io ocmagent.managed.openshift.io config.openshift.io machineconfiguration.openshift.io operator.openshift.io cloudcredential.openshift.io], nor may Managed OpenShift customers alter the APIServer, KubeAPIServer, OpenShiftAPIServer, ClusterVersion, Proxy or SubjectPermission objects." }, { "webhookName": "scc-validation", "rules": [ { "operations": [ "UPDATE", "DELETE" ], "apiGroups": [ "security.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "securitycontextconstraints" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift Customers may not modify the following default SCCs: [anyuid hostaccess hostmount-anyuid hostnetwork hostnetwork-v2 node-exporter nonroot nonroot-v2 privileged restricted restricted-v2]" }, { "webhookName": "sdn-migration-validation", "rules": [ { "operations": [ "UPDATE" ], "apiGroups": [ "config.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "networks" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift customers may not modify the network config type because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring." 
}, { "webhookName": "service-mutation", "rules": [ { "operations": [ "CREATE", "UPDATE" ], "apiGroups": [ "" ], "apiVersions": [ "v1" ], "resources": [ "services" ], "scope": "Namespaced" } ], "documentString": "LoadBalancer-type services on Managed OpenShift clusters must contain an additional annotation for managed policy compliance." }, { "webhookName": "serviceaccount-validation", "rules": [ { "operations": [ "DELETE" ], "apiGroups": [ "" ], "apiVersions": [ "v1" ], "resources": [ "serviceaccounts" ], "scope": "Namespaced" } ], "documentString": "Managed OpenShift Customers may not delete the service accounts under the managed namespaces。" }, { "webhookName": "techpreviewnoupgrade-validation", "rules": [ { "operations": [ "CREATE", "UPDATE" ], "apiGroups": [ "config.openshift.io" ], "apiVersions": [ "*" ], "resources": [ "featuregates" ], "scope": "Cluster" } ], "documentString": "Managed OpenShift Customers may not use TechPreviewNoUpgrade FeatureGate that could prevent any future ability to do a y-stream upgrade to their clusters." } ] | [
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host})",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6",
"Resources: ConfigMap: - namespace: openshift-config name: rosa-brand-logo - namespace: openshift-console name: custom-logo - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-config - namespace: openshift-file-integrity name: fr-aide-conf - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-config - namespace: openshift-monitoring name: cluster-monitoring-config - namespace: openshift-monitoring name: managed-namespaces - namespace: openshift-monitoring name: ocp-namespaces - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter-code - namespace: openshift-monitoring name: sre-dns-latency-exporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-ebs-iops-reporter-code - namespace: openshift-monitoring name: sre-ebs-iops-reporter-trusted-ca-bundle - namespace: openshift-monitoring name: sre-stuck-ebs-vols-code - namespace: openshift-monitoring name: sre-stuck-ebs-vols-trusted-ca-bundle - namespace: openshift-security name: osd-audit-policy - namespace: openshift-validation-webhook name: webhook-cert - namespace: openshift name: motd Endpoints: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook Namespace: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-aws-vpce-operator - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-compliance-monkey - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero - name: openshift-monitoring - name: openshift - name: openshift-cluster-version - name: keycloak - name: goalert - name: configure-goalert-operator ReplicationController: - namespace: openshift-monitoring name: sre-ebs-iops-reporter-1 - namespace: openshift-monitoring name: sre-stuck-ebs-vols-1 Secret: - namespace: openshift-authentication name: v4-0-config-user-idp-0-file-data - namespace: openshift-authentication name: v4-0-config-user-template-error - namespace: openshift-authentication name: 
v4-0-config-user-template-login - namespace: openshift-authentication name: v4-0-config-user-template-provider-selection - namespace: openshift-config name: htpasswd-secret - namespace: openshift-config name: osd-oauth-templates-errors - namespace: openshift-config name: osd-oauth-templates-login - namespace: openshift-config name: osd-oauth-templates-providers - namespace: openshift-config name: rosa-oauth-templates-errors - namespace: openshift-config name: rosa-oauth-templates-login - namespace: openshift-config name: rosa-oauth-templates-providers - namespace: openshift-config name: support - namespace: openshift-config name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-ingress name: tony-devlab-primary-cert-bundle-secret - namespace: openshift-kube-apiserver name: user-serving-cert-000 - namespace: openshift-kube-apiserver name: user-serving-cert-001 - namespace: openshift-monitoring name: dms-secret - namespace: openshift-monitoring name: observatorium-credentials - namespace: openshift-monitoring name: pd-secret - namespace: openshift-scanning name: clam-secrets - namespace: openshift-scanning name: logger-secrets - namespace: openshift-security name: splunk-auth ServiceAccount: - namespace: openshift-backplane-managed-scripts name: osd-backplane - namespace: openshift-backplane-srep name: 6804d07fb268b8285b023bcf65392f0e - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-scanning name: logger-sa - namespace: openshift-scanning name: scanner-sa - namespace: openshift-sre-pruning name: sre-pruner-sa - namespace: openshift-suricata name: suricata-sa - namespace: openshift-validation-webhook name: validation-webhook - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-velero name: velero - namespace: openshift-backplane-srep name: UNIQUE_BACKPLANE_SERVICEACCOUNT_ID Service: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-scanning name: loggerservice - namespace: openshift-security name: audit-exporter - namespace: openshift-validation-webhook name: validation-webhook 
AddonOperator: - name: addon-operator ValidatingWebhookConfiguration: - name: sre-hiveownership-validation - name: sre-namespace-validation - name: sre-pod-validation - name: sre-prometheusrule-validation - name: sre-regular-user-validation - name: sre-scc-validation - name: sre-techpreviewnoupgrade-validation DaemonSet: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-scanning name: logger - namespace: openshift-scanning name: scanner - namespace: openshift-security name: audit-exporter - namespace: openshift-suricata name: suricata - namespace: openshift-validation-webhook name: validation-webhook DeploymentConfig: - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterRoleBinding: - name: aqua-scanner-binding - name: backplane-cluster-admin - name: backplane-impersonate-cluster-admin - name: bz1980755 - name: configure-alertmanager-operator-prom - name: dedicated-admins-cluster - name: dedicated-admins-registry-cas-cluster - name: logger-clusterrolebinding - name: openshift-backplane-managed-scripts-reader - name: osd-cluster-admin - name: osd-cluster-ready - name: osd-delete-backplane-script-resources - name: osd-delete-ownerrefs-serviceaccounts - name: osd-patch-subscription-source - name: osd-rebalance-infra-nodes - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: splunk-forwarder-operator-clusterrolebinding - name: sre-pod-network-connectivity-check-pruner - name: sre-pruner-buildsdeploys-pruning - name: velero - name: webhook-validation ClusterRole: - name: backplane-cee-readers-cluster - name: backplane-impersonate-cluster-admin - name: backplane-readers-cluster - name: backplane-srep-admins-cluster - name: backplane-srep-admins-project - name: bz1980755 - name: dedicated-admins-aggregate-cluster - name: dedicated-admins-aggregate-project - name: dedicated-admins-cluster - name: dedicated-admins-manage-operators - name: dedicated-admins-project - name: dedicated-admins-registry-cas-cluster - name: dedicated-readers - name: image-scanner - name: logger-clusterrole - name: openshift-backplane-managed-scripts-reader - name: openshift-splunk-forwarder-operator - name: osd-cluster-ready - name: osd-custom-domains-dedicated-admin-cluster - name: osd-delete-backplane-script-resources - name: osd-delete-backplane-serviceaccounts - name: osd-delete-ownerrefs-serviceaccounts - name: osd-get-namespace - name: osd-netnamespaces-dedicated-admin-cluster - name: osd-patch-subscription-source - name: osd-readers-aggregate - name: osd-rebalance-infra-nodes - name: osd-rebalance-infra-nodes-openshift-pod-rebalance - name: pcap-dedicated-admins - name: splunk-forwarder-operator - name: sre-allow-read-machine-info - name: sre-pruner-buildsdeploys-cr - name: webhook-validation-cr RoleBinding: - namespace: kube-system name: cloud-ingress-operator-cluster-config-v1-reader - namespace: kube-system name: managed-velero-operator-cluster-config-v1-reader - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-mustgather - namespace: openshift-backplane-managed-scripts name: backplane-srep-mustgather - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-cloud-ingress-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: 
openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-custom-domains-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-image-registry name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: admin-dedicated-admins - namespace: openshift-logging name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-logging name: openshift-logging-dedicated-admins - namespace: openshift-logging name: openshift-logging:serviceaccounts:dedicated-admin - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: sre-ebs-iops-reporter-read-machine-info - namespace: openshift-machine-api name: sre-stuck-ebs-vols-read-machine-info - namespace: openshift-managed-node-metadata-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-must-gather-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-network-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-ocm-agent-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-operators-redhat name: admin-dedicated-admins - namespace: openshift-operators-redhat name: admin-system:serviceaccounts:dedicated-admin - namespace: openshift-operators-redhat name: openshift-operators-redhat-dedicated-admins - namespace: openshift-operators-redhat name: openshift-operators-redhat:serviceaccounts:dedicated-admin - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-osd-metrics name: 
prometheus-k8s - namespace: openshift-rbac-permissions name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-route-monitor-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-scanning name: scanner-rolebinding - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-splunk-forwarder-operator name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-suricata name: suricata-rolebinding - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-create - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-config-edit - namespace: openshift-user-workload-monitoring name: dedicated-admins-uwm-managed-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: osd-rebalance-infra-nodes-openshift-pod-rebalance - namespace: openshift-velero name: prometheus-k8s Role: - namespace: kube-system name: cluster-config-v1-reader - namespace: kube-system name: cluster-config-v1-reader-cio - namespace: openshift-aqua name: dedicated-admins-openshift-aqua - namespace: openshift-backplane-managed-scripts name: backplane-cee-pcap-collector - namespace: openshift-backplane-managed-scripts name: backplane-srep-pcap-collector - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-codeready-workspaces name: dedicated-admins-openshift-codeready-workspaces - namespace: openshift-config name: dedicated-admins-project-request - namespace: openshift-config name: dedicated-admins-registry-cas-project - namespace: openshift-config name: muo-pullsecret-reader - namespace: openshift-config name: oao-openshiftconfig-reader - namespace: openshift-config name: osd-cluster-ready - namespace: openshift-customer-monitoring name: dedicated-admins-openshift-customer-monitoring - namespace: openshift-customer-monitoring name: prometheus-k8s-openshift-customer-monitoring - namespace: openshift-dns name: dedicated-admins-openshift-dns - namespace: openshift-dns name: osd-rebalance-infra-nodes-openshift-dns - namespace: openshift-ingress-operator name: cloud-ingress-operator - namespace: openshift-ingress name: cloud-ingress-operator - namespace: openshift-kube-apiserver name: cloud-ingress-operator - namespace: openshift-machine-api name: cloud-ingress-operator - namespace: openshift-logging name: dedicated-admins-openshift-logging - namespace: openshift-machine-api name: osd-cluster-ready - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: dedicated-admins-openshift-marketplace - namespace: openshift-monitoring name: backplane-cee - namespace: openshift-monitoring name: muo-monitoring-reader - namespace: openshift-monitoring name: oao-monitoring-manager - namespace: openshift-monitoring name: osd-cluster-ready - namespace: openshift-monitoring name: osd-rebalance-infra-nodes-openshift-monitoring - namespace: openshift-must-gather-operator name: backplane-cee-mustgather - namespace: openshift-must-gather-operator name: backplane-srep-mustgather - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-operators name: dedicated-admins-openshift-operators - namespace: openshift-osd-metrics 
name: prometheus-k8s - namespace: openshift-rbac-permissions name: prometheus-k8s - namespace: openshift-scanning name: scanner-role - namespace: openshift-security name: osd-rebalance-infra-nodes-openshift-security - namespace: openshift-security name: prometheus-k8s - namespace: openshift-suricata name: suricata-role - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-create-cm - namespace: openshift-user-workload-monitoring name: dedicated-admins-user-workload-monitoring-manage-am-secret - namespace: openshift-user-workload-monitoring name: osd-rebalance-infra-nodes-openshift-user-workload-monitoring - namespace: openshift-velero name: prometheus-k8s CronJob: - namespace: openshift-backplane-managed-scripts name: osd-delete-backplane-script-resources - namespace: openshift-backplane-srep name: osd-delete-ownerrefs-serviceaccounts - namespace: openshift-backplane name: osd-delete-backplane-serviceaccounts - namespace: openshift-machine-api name: osd-disable-cpms - namespace: openshift-marketplace name: osd-patch-subscription-source - namespace: openshift-monitoring name: osd-rebalance-infra-nodes - namespace: openshift-network-diagnostics name: sre-pod-network-connectivity-check-pruner - namespace: openshift-sre-pruning name: builds-pruner - namespace: openshift-sre-pruning name: bz1980755 - namespace: openshift-sre-pruning name: deployments-pruner Job: - namespace: openshift-monitoring name: osd-cluster-ready CredentialsRequest: - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-aws - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-credentials-gcp - namespace: openshift-monitoring name: sre-ebs-iops-reporter-aws-credentials - namespace: openshift-monitoring name: sre-stuck-ebs-vols-aws-credentials - namespace: openshift-velero name: managed-velero-operator-iam-credentials-aws - namespace: openshift-velero name: managed-velero-operator-iam-credentials-gcp APIScheme: - namespace: openshift-cloud-ingress-operator name: rh-api PublishingStrategy: - namespace: openshift-cloud-ingress-operator name: publishingstrategy ScanSettingBinding: - namespace: openshift-compliance name: fedramp-high-ocp - namespace: openshift-compliance name: fedramp-high-rhcos ScanSetting: - namespace: openshift-compliance name: osd TailoredProfile: - namespace: openshift-compliance name: rhcos4-high-rosa OAuth: - name: cluster EndpointSlice: - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-metrics-rhtwg - namespace: openshift-monitoring name: sre-dns-latency-exporter-4cw9r - namespace: openshift-monitoring name: sre-ebs-iops-reporter-6tx5g - namespace: openshift-monitoring name: sre-stuck-ebs-vols-gmdhs - namespace: openshift-scanning name: loggerservice-zprbq - namespace: openshift-security name: audit-exporter-nqfdk - namespace: openshift-validation-webhook name: validation-webhook-97b8t FileIntegrity: - namespace: openshift-file-integrity name: osd-fileintegrity MachineHealthCheck: - namespace: openshift-machine-api name: srep-infra-healthcheck - namespace: openshift-machine-api name: srep-metal-worker-healthcheck - namespace: openshift-machine-api name: srep-worker-healthcheck MachineSet: - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-infra-us-east-1a - namespace: openshift-machine-api name: sbasabat-mc-qhqkn-worker-us-east-1a ContainerRuntimeConfig: - name: custom-crio KubeletConfig: - name: custom-kubelet MachineConfig: - name: 00-master-chrony - name: 
00-worker-chrony SubjectPermission: - namespace: openshift-rbac-permissions name: backplane-cee - namespace: openshift-rbac-permissions name: backplane-csa - namespace: openshift-rbac-permissions name: backplane-cse - namespace: openshift-rbac-permissions name: backplane-csm - namespace: openshift-rbac-permissions name: backplane-mobb - namespace: openshift-rbac-permissions name: backplane-srep - namespace: openshift-rbac-permissions name: backplane-tam - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts - namespace: openshift-rbac-permissions name: dedicated-admin-serviceaccounts-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins - namespace: openshift-rbac-permissions name: dedicated-admins-alert-routing-edit - namespace: openshift-rbac-permissions name: dedicated-admins-core-ns - namespace: openshift-rbac-permissions name: dedicated-admins-customer-monitoring - namespace: openshift-rbac-permissions name: osd-delete-backplane-serviceaccounts VeleroInstall: - namespace: openshift-velero name: cluster PrometheusRule: - namespace: openshift-monitoring name: rhmi-sre-cluster-admins - namespace: openshift-monitoring name: rhoam-sre-cluster-admins - namespace: openshift-monitoring name: sre-alertmanager-silences-active - namespace: openshift-monitoring name: sre-alerts-stuck-builds - namespace: openshift-monitoring name: sre-alerts-stuck-volumes - namespace: openshift-monitoring name: sre-cloud-ingress-operator-offline-alerts - namespace: openshift-monitoring name: sre-avo-pendingacceptance - namespace: openshift-monitoring name: sre-configure-alertmanager-operator-offline-alerts - namespace: openshift-monitoring name: sre-control-plane-resizing-alerts - namespace: openshift-monitoring name: sre-dns-alerts - namespace: openshift-monitoring name: sre-ebs-iops-burstbalance - namespace: openshift-monitoring name: sre-elasticsearch-jobs - namespace: openshift-monitoring name: sre-elasticsearch-managed-notification-alerts - namespace: openshift-monitoring name: sre-excessive-memory - namespace: openshift-monitoring name: sre-fr-alerts-low-disk-space - namespace: openshift-monitoring name: sre-haproxy-reload-fail - namespace: openshift-monitoring name: sre-internal-slo-recording-rules - namespace: openshift-monitoring name: sre-kubequotaexceeded - namespace: openshift-monitoring name: sre-leader-election-master-status-alerts - namespace: openshift-monitoring name: sre-managed-kube-apiserver-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-controller-manager-missing-on-node - namespace: openshift-monitoring name: sre-managed-kube-scheduler-missing-on-node - namespace: openshift-monitoring name: sre-managed-node-metadata-operator-alerts - namespace: openshift-monitoring name: sre-managed-notification-alerts - namespace: openshift-monitoring name: sre-managed-upgrade-operator-alerts - namespace: openshift-monitoring name: sre-managed-velero-operator-alerts - namespace: openshift-monitoring name: sre-node-unschedulable - namespace: openshift-monitoring name: sre-oauth-server - namespace: openshift-monitoring name: sre-pending-csr-alert - namespace: openshift-monitoring name: sre-proxy-managed-notification-alerts - namespace: openshift-monitoring name: sre-pruning - namespace: openshift-monitoring name: sre-pv - namespace: openshift-monitoring name: sre-router-health - namespace: openshift-monitoring name: sre-runaway-sdn-preventing-container-creation - namespace: openshift-monitoring name: sre-slo-recording-rules - namespace: 
openshift-monitoring name: sre-telemeter-client - namespace: openshift-monitoring name: sre-telemetry-managed-labels-recording-rules - namespace: openshift-monitoring name: sre-upgrade-send-managed-notification-alerts - namespace: openshift-monitoring name: sre-uptime-sla ServiceMonitor: - namespace: openshift-monitoring name: sre-dns-latency-exporter - namespace: openshift-monitoring name: sre-ebs-iops-reporter - namespace: openshift-monitoring name: sre-stuck-ebs-vols ClusterUrlMonitor: - namespace: openshift-route-monitor-operator name: api RouteMonitor: - namespace: openshift-route-monitor-operator name: console NetworkPolicy: - namespace: openshift-deployment-validation-operator name: allow-from-openshift-insights - namespace: openshift-deployment-validation-operator name: allow-from-openshift-olm ManagedNotification: - namespace: openshift-ocm-agent-operator name: sre-elasticsearch-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-proxy-managed-notifications - namespace: openshift-ocm-agent-operator name: sre-upgrade-managed-notifications OcmAgent: - namespace: openshift-ocm-agent-operator name: ocmagent - namespace: openshift-security name: audit-exporter Console: - name: cluster CatalogSource: - namespace: openshift-addon-operator name: addon-operator-catalog - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator-registry - namespace: openshift-compliance name: compliance-operator-registry - namespace: openshift-container-security name: container-security-operator-registry - namespace: openshift-custom-domains-operator name: custom-domains-operator-registry - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-catalog - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator-registry - namespace: openshift-file-integrity name: file-integrity-operator-registry - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-catalog - namespace: openshift-monitoring name: configure-alertmanager-operator-registry - namespace: openshift-must-gather-operator name: must-gather-operator-registry - namespace: openshift-observability-operator name: observability-operator-catalog - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-registry - namespace: openshift-osd-metrics name: osd-metrics-exporter-registry - namespace: openshift-rbac-permissions name: rbac-permissions-operator-registry - namespace: openshift-route-monitor-operator name: route-monitor-operator-registry - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-catalog - namespace: openshift-velero name: managed-velero-operator-registry OperatorGroup: - namespace: openshift-addon-operator name: addon-operator-og - namespace: openshift-aqua name: openshift-aqua - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-codeready-workspaces name: openshift-codeready-workspaces - namespace: openshift-compliance name: compliance-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-customer-monitoring name: openshift-customer-monitoring - namespace: openshift-deployment-validation-operator name: deployment-validation-operator-og - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - 
namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-logging name: openshift-logging - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator-og - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator-og - namespace: openshift-ocm-agent-operator name: ocm-agent-operator-og - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator-og - namespace: openshift-velero name: managed-velero-operator Subscription: - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-compliance name: compliance-operator-sub - namespace: openshift-container-security name: container-security-operator-sub - namespace: openshift-custom-domains-operator name: custom-domains-operator - namespace: openshift-deployment-validation-operator name: deployment-validation-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-file-integrity name: file-integrity-operator-sub - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: must-gather-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-osd-metrics name: osd-metrics-exporter - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-splunk-forwarder-operator name: openshift-splunk-forwarder-operator - namespace: openshift-velero name: managed-velero-operator PackageManifest: - namespace: openshift-splunk-forwarder-operator name: splunk-forwarder-operator - namespace: openshift-addon-operator name: addon-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator - namespace: openshift-cloud-ingress-operator name: cloud-ingress-operator - namespace: openshift-managed-node-metadata-operator name: managed-node-metadata-operator - namespace: openshift-velero name: managed-velero-operator - namespace: openshift-deployment-validation-operator name: managed-upgrade-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-container-security name: container-security-operator - namespace: openshift-route-monitor-operator name: route-monitor-operator - namespace: openshift-file-integrity name: file-integrity-operator - namespace: openshift-custom-domains-operator name: managed-node-metadata-operator - namespace: openshift-route-monitor-operator name: custom-domains-operator - namespace: openshift-managed-upgrade-operator name: managed-upgrade-operator - namespace: openshift-ocm-agent-operator name: ocm-agent-operator - namespace: openshift-observability-operator name: observability-operator - namespace: openshift-monitoring name: configure-alertmanager-operator - namespace: openshift-must-gather-operator name: deployment-validation-operator - namespace: openshift-osd-metrics name: 
osd-metrics-exporter - namespace: openshift-compliance name: compliance-operator - namespace: openshift-rbac-permissions name: rbac-permissions-operator Status: - {} Project: - name: dedicated-admin - name: openshift-addon-operator - name: openshift-aqua - name: openshift-backplane - name: openshift-backplane-cee - name: openshift-backplane-csa - name: openshift-backplane-cse - name: openshift-backplane-csm - name: openshift-backplane-managed-scripts - name: openshift-backplane-mobb - name: openshift-backplane-srep - name: openshift-backplane-tam - name: openshift-cloud-ingress-operator - name: openshift-codeready-workspaces - name: openshift-compliance - name: openshift-container-security - name: openshift-custom-domains-operator - name: openshift-customer-monitoring - name: openshift-deployment-validation-operator - name: openshift-managed-node-metadata-operator - name: openshift-file-integrity - name: openshift-logging - name: openshift-managed-upgrade-operator - name: openshift-must-gather-operator - name: openshift-observability-operator - name: openshift-ocm-agent-operator - name: openshift-operators-redhat - name: openshift-osd-metrics - name: openshift-rbac-permissions - name: openshift-route-monitor-operator - name: openshift-scanning - name: openshift-security - name: openshift-splunk-forwarder-operator - name: openshift-sre-pruning - name: openshift-suricata - name: openshift-validation-webhook - name: openshift-velero ClusterResourceQuota: - name: loadbalancer-quota - name: persistent-volume-quota SecurityContextConstraints: - name: osd-scanning-scc - name: osd-suricata-scc - name: pcap-dedicated-admins - name: splunkforwarder SplunkForwarder: - namespace: openshift-security name: splunkforwarder Group: - name: cluster-admins - name: dedicated-admins User: - name: backplane-cluster-admin Backup: - namespace: openshift-velero name: daily-full-backup-20221123112305 - namespace: openshift-velero name: daily-full-backup-20221125042537 - namespace: openshift-velero name: daily-full-backup-20221126010038 - namespace: openshift-velero name: daily-full-backup-20221127010039 - namespace: openshift-velero name: daily-full-backup-20221128010040 - namespace: openshift-velero name: daily-full-backup-20221129050847 - namespace: openshift-velero name: hourly-object-backup-20221128051740 - namespace: openshift-velero name: hourly-object-backup-20221128061740 - namespace: openshift-velero name: hourly-object-backup-20221128071740 - namespace: openshift-velero name: hourly-object-backup-20221128081740 - namespace: openshift-velero name: hourly-object-backup-20221128091740 - namespace: openshift-velero name: hourly-object-backup-20221129050852 - namespace: openshift-velero name: hourly-object-backup-20221129051747 - namespace: openshift-velero name: weekly-full-backup-20221116184315 - namespace: openshift-velero name: weekly-full-backup-20221121033854 - namespace: openshift-velero name: weekly-full-backup-20221128020040 Schedule: - namespace: openshift-velero name: daily-full-backup - namespace: openshift-velero name: hourly-object-backup - namespace: openshift-velero name: weekly-full-backup",
"apiVersion: v1 kind: ConfigMap metadata: name: ocp-namespaces namespace: openshift-monitoring data: managed_namespaces.yaml: | Resources: Namespace: - name: kube-system - name: openshift-apiserver - name: openshift-apiserver-operator - name: openshift-authentication - name: openshift-authentication-operator - name: openshift-cloud-controller-manager - name: openshift-cloud-controller-manager-operator - name: openshift-cloud-credential-operator - name: openshift-cloud-network-config-controller - name: openshift-cluster-api - name: openshift-cluster-csi-drivers - name: openshift-cluster-machine-approver - name: openshift-cluster-node-tuning-operator - name: openshift-cluster-samples-operator - name: openshift-cluster-storage-operator - name: openshift-config - name: openshift-config-managed - name: openshift-config-operator - name: openshift-console - name: openshift-console-operator - name: openshift-console-user-settings - name: openshift-controller-manager - name: openshift-controller-manager-operator - name: openshift-dns - name: openshift-dns-operator - name: openshift-etcd - name: openshift-etcd-operator - name: openshift-host-network - name: openshift-image-registry - name: openshift-ingress - name: openshift-ingress-canary - name: openshift-ingress-operator - name: openshift-insights - name: openshift-kni-infra - name: openshift-kube-apiserver - name: openshift-kube-apiserver-operator - name: openshift-kube-controller-manager - name: openshift-kube-controller-manager-operator - name: openshift-kube-scheduler - name: openshift-kube-scheduler-operator - name: openshift-kube-storage-version-migrator - name: openshift-kube-storage-version-migrator-operator - name: openshift-machine-api - name: openshift-machine-config-operator - name: openshift-marketplace - name: openshift-monitoring - name: openshift-multus - name: openshift-network-diagnostics - name: openshift-network-operator - name: openshift-nutanix-infra - name: openshift-oauth-apiserver - name: openshift-openstack-infra - name: openshift-operator-lifecycle-manager - name: openshift-operators - name: openshift-ovirt-infra - name: openshift-sdn - name: openshift-ovn-kubernetes - name: openshift-platform-operators - name: openshift-route-controller-manager - name: openshift-service-ca - name: openshift-service-ca-operator - name: openshift-user-workload-monitoring - name: openshift-vsphere-infra",
"addon-namespaces: ocs-converged-dev: openshift-storage managed-api-service-internal: redhat-rhoami-operator codeready-workspaces-operator: codeready-workspaces-operator managed-odh: redhat-ods-operator codeready-workspaces-operator-qe: codeready-workspaces-operator-qe integreatly-operator: redhat-rhmi-operator nvidia-gpu-addon: redhat-nvidia-gpu-addon integreatly-operator-internal: redhat-rhmi-operator rhoams: redhat-rhoam-operator ocs-converged: openshift-storage addon-operator: redhat-addon-operator prow-operator: prow cluster-logging-operator: openshift-logging advanced-cluster-management: redhat-open-cluster-management cert-manager-operator: redhat-cert-manager-operator dba-operator: addon-dba-operator reference-addon: redhat-reference-addon ocm-addon-test-operator: redhat-ocm-addon-test-operator",
"[ { \"webhookName\": \"clusterlogging-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"logging.openshift.io\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"clusterloggings\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may set log retention outside the allowed range of 0-7 days\" }, { \"webhookName\": \"clusterrolebindings-validation\", \"rules\": [ { \"operations\": [ \"DELETE\" ], \"apiGroups\": [ \"rbac.authorization.k8s.io\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"clusterrolebindings\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not delete the cluster role bindings under the managed namespaces: (^openshift-.*|kube-system)\" }, { \"webhookName\": \"customresourcedefinitions-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"apiextensions.k8s.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"customresourcedefinitions\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not change CustomResourceDefinitions managed by Red Hat.\" }, { \"webhookName\": \"hiveownership-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"quota.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterresourcequotas\" ], \"scope\": \"Cluster\" } ], \"webhookObjectSelector\": { \"matchLabels\": { \"hive.openshift.io/managed\": \"true\" } }, \"documentString\": \"Managed OpenShift customers may not edit certain managed resources. A managed resource has a \\\"hive.openshift.io/managed\\\": \\\"true\\\" label.\" }, { \"webhookName\": \"imagecontentpolicies-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"imagedigestmirrorsets\", \"imagetagmirrorsets\" ], \"scope\": \"Cluster\" }, { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"imagecontentsourcepolicies\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not create ImageContentSourcePolicy, ImageDigestMirrorSet, or ImageTagMirrorSet resources that configure mirrors that would conflict with system registries (e.g. quay.io, registry.redhat.io, registry.access.redhat.com, etc). For more details, see https://docs.openshift.com/\" }, { \"webhookName\": \"ingress-config-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"ingresses\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not modify ingress config resources because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring.\" }, { \"webhookName\": \"ingresscontroller-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"ingresscontroller\", \"ingresscontrollers\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customer may create IngressControllers without necessary taints. 
This can cause those workloads to be provisioned on infra or master nodes.\" }, { \"webhookName\": \"namespace-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"namespaces\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not modify namespaces specified in the [openshift-monitoring/managed-namespaces openshift-monitoring/ocp-namespaces] ConfigMaps because customer workloads should be placed in customer-created namespaces. Customers may not create namespaces identified by this regular expression (^comUSD|^ioUSD|^inUSD) because it could interfere with critical DNS resolution. Additionally, customers may not set or change the values of these Namespace labels [managed.openshift.io/storage-pv-quota-exempt managed.openshift.io/service-lb-quota-exempt].\" }, { \"webhookName\": \"networkpolicies-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"networking.k8s.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"networkpolicies\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not create NetworkPolicies in namespaces managed by Red Hat.\" }, { \"webhookName\": \"node-validation-osd\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"nodes\", \"nodes/*\" ], \"scope\": \"*\" } ], \"documentString\": \"Managed OpenShift customers may not alter Node objects.\" }, { \"webhookName\": \"pod-validation\", \"rules\": [ { \"operations\": [ \"*\" ], \"apiGroups\": [ \"v1\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"pods\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may use tolerations on Pods that could cause those Pods to be scheduled on infra or master nodes.\" }, { \"webhookName\": \"prometheusrule-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"monitoring.coreos.com\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"prometheusrules\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not create PrometheusRule in namespaces managed by Red Hat.\" }, { \"webhookName\": \"regular-user-validation\", \"rules\": [ { \"operations\": [ \"*\" ], \"apiGroups\": [ \"cloudcredential.openshift.io\", \"machine.openshift.io\", \"admissionregistration.k8s.io\", \"addons.managed.openshift.io\", \"cloudingress.managed.openshift.io\", \"managed.openshift.io\", \"ocmagent.managed.openshift.io\", \"splunkforwarder.managed.openshift.io\", \"upgrade.managed.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"*/*\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"autoscaling.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterautoscalers\", \"machineautoscalers\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"clusterversions\", \"clusterversions/status\", \"schedulers\", \"apiservers\", \"proxies\" ], \"scope\": \"*\" }, { \"operations\": [ \"CREATE\", \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"configmaps\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"machineconfiguration.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ 
\"machineconfigs\", \"machineconfigpools\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"operator.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"kubeapiservers\", \"openshiftapiservers\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"managed.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"subjectpermissions\", \"subjectpermissions/*\" ], \"scope\": \"*\" }, { \"operations\": [ \"*\" ], \"apiGroups\": [ \"network.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"netnamespaces\", \"netnamespaces/*\" ], \"scope\": \"*\" } ], \"documentString\": \"Managed OpenShift customers may not manage any objects in the following APIGroups [autoscaling.openshift.io network.openshift.io machine.openshift.io admissionregistration.k8s.io addons.managed.openshift.io cloudingress.managed.openshift.io splunkforwarder.managed.openshift.io upgrade.managed.openshift.io managed.openshift.io ocmagent.managed.openshift.io config.openshift.io machineconfiguration.openshift.io operator.openshift.io cloudcredential.openshift.io], nor may Managed OpenShift customers alter the APIServer, KubeAPIServer, OpenShiftAPIServer, ClusterVersion, Proxy or SubjectPermission objects.\" }, { \"webhookName\": \"scc-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\", \"DELETE\" ], \"apiGroups\": [ \"security.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"securitycontextconstraints\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not modify the following default SCCs: [anyuid hostaccess hostmount-anyuid hostnetwork hostnetwork-v2 node-exporter nonroot nonroot-v2 privileged restricted restricted-v2]\" }, { \"webhookName\": \"sdn-migration-validation\", \"rules\": [ { \"operations\": [ \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"networks\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift customers may not modify the network config type because it can can degrade cluster operators and can interfere with OpenShift SRE monitoring.\" }, { \"webhookName\": \"service-mutation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"services\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"LoadBalancer-type services on Managed OpenShift clusters must contain an additional annotation for managed policy compliance.\" }, { \"webhookName\": \"serviceaccount-validation\", \"rules\": [ { \"operations\": [ \"DELETE\" ], \"apiGroups\": [ \"\" ], \"apiVersions\": [ \"v1\" ], \"resources\": [ \"serviceaccounts\" ], \"scope\": \"Namespaced\" } ], \"documentString\": \"Managed OpenShift Customers may not delete the service accounts under the managed namespaces。\" }, { \"webhookName\": \"techpreviewnoupgrade-validation\", \"rules\": [ { \"operations\": [ \"CREATE\", \"UPDATE\" ], \"apiGroups\": [ \"config.openshift.io\" ], \"apiVersions\": [ \"*\" ], \"resources\": [ \"featuregates\" ], \"scope\": \"Cluster\" } ], \"documentString\": \"Managed OpenShift Customers may not use TechPreviewNoUpgrade FeatureGate that could prevent any future ability to do a y-stream upgrade to their clusters.\" } ]"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/support/troubleshooting |
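The webhook entries listed above follow the standard Kubernetes admission webhook rule schema, so each one corresponds to a rule inside a ValidatingWebhookConfiguration object ( admissionregistration.k8s.io/v1 ). As an illustration only, the namespace-validation entry would be expressed roughly as the sketch below; the rule fields ( operations , apiGroups , apiVersions , resources , scope ) are copied from the output above, while the metadata name, webhook name, failurePolicy, and clientConfig service reference are placeholders rather than the actual values used on a Managed OpenShift cluster.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: sre-namespace-validation                       # placeholder name
webhooks:
  - name: namespace-validation.managed.openshift.io    # placeholder webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                               # assumption; the real policy is set by the managed webhook deployment
    rules:
      - operations: ["CREATE", "UPDATE", "DELETE"]
        apiGroups: [""]
        apiVersions: ["*"]
        resources: ["namespaces"]
        scope: "Cluster"
    clientConfig:
      service:
        namespace: openshift-validation-webhook        # placeholder service reference
        name: validation-webhook
        path: /namespace-validation

The same mapping applies to every other entry in the list, with only the rules block and documentString changing per webhook.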
Providing Feedback on Red Hat Documentation | Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_virtual_machine_subscriptions_in_red_hat_satellite/providing-feedback-on-red-hat-documentation_vm-subs-satellite |
Chapter 3. Configuration fields | Chapter 3. Configuration fields This section describes both the required and optional configuration fields when deploying Red Hat Quay. 3.1. Required configuration fields The fields required to configure Red Hat Quay are covered in the following sections: General required fields Storage for images Database for metadata Redis for build logs and user events Tag expiration options 3.2. Automation options The following sections describe the available automation options for Red Hat Quay deployments: Pre-configuring Red Hat Quay for automation Using the API to create the first user 3.3. Optional configuration fields Optional fields for Red Hat Quay can be found in the following sections: Basic configuration SSL LDAP Repository mirroring Quota management Security scanner Helm Action log Build logs Dockerfile build OAuth Configuring nested repositories Adding other OCI media types to Quay Mail User Recaptcha ACI JWT App tokens Miscellaneous User interface v2 IPv6 configuration field Legacy options 3.4. General required fields The following table describes the required configuration fields for a Red Hat Quay deployment: Table 3.1. General required fields Field Type Description AUTHENTICATION_TYPE (Required) String The authentication engine to use for credential authentication. Values: One of Database , LDAP , JWT , Keystone Default: Database PREFERRED_URL_SCHEME (Required) String The URL scheme to use when accessing Red Hat Quay. Values: One of http , https Default: http SERVER_HOSTNAME (Required) String The URL at which Red Hat Quay is accessible, without the scheme. Example: quay-server.example.com DATABASE_SECRET_KEY (Required) String Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated. SECRET_KEY (Required) String Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Red Hat Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur. SETUP_COMPLETE (Required) Boolean This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true . 3.5. Database configuration This section describes the database configuration fields available for Red Hat Quay deployments. 3.5.1. Database URI With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field: Table 3.2. Database URI Field Type Description DB_URI (Required) String The URI for accessing the database, including any credentials. Example DB_URI field: postgresql://quayuser:[email protected]:5432/quay 3.5.2. Database connection arguments Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific. The following table describes database connection arguments: Table 3.3. Database connection arguments Field Type Description DB_CONNECTION_ARGS Object Optional connection arguments for the database, such as timeouts and SSL/TLS. .autorollback Boolean Whether to use auto-rollback connections. Should always be true .threadlocals Boolean Whether to use thread-local connections.
Should always be true 3.5.2.1. PostgreSQL SSL/TLS connection arguments With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration: DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert The sslmode option determines whether, or with, what priority a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes: Table 3.4. SSL/TLS options Mode Description disable Your configuration only tries non-SSL/TLS connections. allow Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. prefer (Default) Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. require Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. verify-ca Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). verify-full Only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions . 3.5.2.2. MySQL SSL/TLS connection arguments The following example shows a sample MySQL SSL/TLS configuration: DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 3.6. Image storage This section details the image storage features and configuration fields that are available with Red Hat Quay. 3.6.1. Image storage features The following table describes the image storage features for Red Hat Quay: Table 3.5. Storage config features Field Type Description FEATURE_REPO_MIRROR Boolean If set to true, enables repository mirroring. Default: false FEATURE_PROXY_STORAGE Boolean Whether to proxy all direct download URLs in storage through NGINX. Default: false FEATURE_STORAGE_REPLICATION Boolean Whether to automatically replicate between storage engines. Default: false 3.6.2. Image storage configuration fields The following table describes the image storage configuration fields for Red Hat Quay: Table 3.6. Storage config fields Field Type Description DISTRIBUTED_STORAGE_CONFIG (Required) Object Configuration for storage engine(s) to use in Red Hat Quay. Each key represents an unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters. Default: [] DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS (Required) Array of string The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG ) whose images should be fully replicated, by default, to all other storage engines. DISTRIBUTED_STORAGE_PREFERENCE (Required) Array of string The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG ) to use. A preferred engine means it is first checked for pulling and images are pushed to it. Default: false MAXIMUM_LAYER_SIZE String Maximum allowed size of an image layer. Pattern : ^[0-9]+(G|M)USD Example : 100G Default: 20G 3.6.3. 
Local storage The following YAML shows a sample configuration using local storage: DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default 3.6.4. OCS/NooBaa The following YAML shows a sample configuration using an OpenShift Container Storage/NooBaa instance: DISTRIBUTED_STORAGE_CONFIG: rhocsStorage: - RHOCSStorage - access_key: access_key_here secret_key: secret_key_here bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56 hostname: s3.openshift-storage.svc.cluster.local is_secure: 'true' port: '443' storage_path: /datastorage/registry 3.6.5. Ceph/RadosGW storage The following examples show two possible YAML configurations when using Ceph/RadosGW. Example A: Using RadosGW with the radosGWStorage driver DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: - RadosGWStorage - access_key: <access_key_here> secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true port: '443' storage_path: /datastorage/registry Example B: Using RadosGW with general s3 access DISTRIBUTED_STORAGE_CONFIG: s3Storage: 1 - RadosGWStorage - access_key: <access_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true secret_key: <secret_key_here> storage_path: /datastorage/registry 1 Used for general s3 access. Note that general s3 access is not strictly limited to Amazon Web Services (AWS) S3, and can be used with RadosGW or other storage services. For an example of general s3 access using the AWS S3 driver, see "AWS S3 storage". 3.6.6. AWS S3 storage The following YAML shows a sample configuration using AWS S3 storage. DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage 1 - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default 1 The S3Storage storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access". 3.6.7. Google Cloud Storage The following YAML shows a sample configuration using Google Cloud Storage: DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage 3.6.8. Azure Storage The following YAML shows a sample configuration using Azure Storage: DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage 1 The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Azure region. As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service.
Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary . 3.6.9. Swift storage The following YAML shows a sample configuration using Swift storage: DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 1 ca_cert_path: /conf/stack/swift.cert" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage 3.6.10. Nutanix object storage The following YAML shows a sample configuration using Nutanix object storage. DISTRIBUTED_STORAGE_CONFIG: nutanixStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - nutanixStorage 3.6.11. IBM Cloud object storage The following YAML shows a sample configuration using IBM Cloud object storage. DISTRIBUTED_STORAGE_CONFIG: default: - IBMCloudStorage #actual driver - access_key: <access_key_here> #parameters secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: 'true' port: '443' storage_path: /datastorage/registry maximum_chunk_size_mb: 100mb 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - default DISTRIBUTED_STORAGE_PREFERENCE: - default 1 Optional. Recommended to be set to 100mb . 3.7. Redis configuration fields This section details the configuration fields available for Redis deployments. 3.7.1. Build logs The following build logs configuration fields are available for Redis deployments: Table 3.7. Build logs configuration Field Type Description BUILDLOGS_REDIS (Required) Object Redis connection details for build logs caching. .host (Required) String The hostname at which Redis is accessible. Example: quay-server.example.com .port (Required) Number The port at which Redis is accessible. Example: 6379 .password String The password to connect to the Redis instance. Example: strongpassword .ssl (Optional) Boolean Whether to enable TLS communication between Redis and Quay. Defaults to false. 3.7.2. User events The following user event fields are available for Redis deployments: Table 3.8. User events config Field Type Description USER_EVENTS_REDIS (Required) Object Redis connection details for user event handling. .host (Required) String The hostname at which Redis is accessible. Example: quay-server.example.com .port (Required) Number The port at which Redis is accessible. Example: 6379 .password String The password to connect to the Redis instance. Example: strongpassword .ssl Boolean Whether to enable TLS communication between Redis and Quay. Defaults to false. .ssl_keyfile (Optional) String The name of the key database file, which houses the client certificate to be used. Example: ssl_keyfile: /path/to/server/privatekey.pem .ssl_certfile (Optional) String Used for specifying the file path of the SSL certificate. Example: ssl_certfile: /path/to/server/certificate.pem .ssl_cert_reqs (Optional) String Used to specify the level of certificate validation to be performed during the SSL/TLS handshake. 
Example: ssl_cert_reqs: CERT_REQUIRED .ssl_ca_certs (Optional) String Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates. Example: ssl_ca_certs: /path/to/ca_certs.pem .ssl_ca_data (Optional) String Used to specify a string containing the trusted CA certificates in PEM format. Example: ssl_ca_data: <certificate> .ssl_check_hostname (Optional) Boolean Used when setting up an SSL/TLS connection to a server. It specifies whether the client should check that the hostname in the server's SSL/TLS certificate matches the hostname of the server it is connecting to. Example: ssl_check_hostname: true 3.7.3. Example Redis configuration The following YAML shows a sample configuration using Redis with optional SSL/TLS fields: BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true ssl_*: <path_location_or_certificate> Note If your deployment uses Azure Cache for Redis and ssl is set to true , the port defaults to 6380 . 3.8. ModelCache configuration options The following options are available on Red Hat Quay for configuring ModelCache. 3.8.1. Memcache configuration option Memcache is the default ModelCache configuration option. With Memcache, no additional configuration is necessary. 3.8.2. Single Redis configuration option The following configuration is for a single Redis instance with optional read-only replicas: DATA_MODEL_CACHE_CONFIG: engine: redis redis_config: primary: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > replica: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > 3.8.3. Clustered Redis configuration option Use the following configuration for a clustered Redis instance: DATA_MODEL_CACHE_CONFIG: engine: rediscluster redis_config: startup_nodes: - host: <cluster-host> port: <port> password: <password if ssl: true> read_from_replicas: <true|false> skip_full_coverage_check: <true | false> ssl: <true | false > 3.9. Tag expiration configuration fields The following tag expiration configuration fields are available with Red Hat Quay: Table 3.9. Tag expiration configuration fields Field Type Description FEATURE_GARBAGE_COLLECTION Boolean Whether garbage collection of repositories is enabled. Default: True TAG_EXPIRATION_OPTIONS (Required) Array of string If enabled, the options that users can select for expiration of tags in their namespace. Pattern: ^[0-9]+(w|m|d|h|s)USD DEFAULT_TAG_EXPIRATION (Required) String The default, configurable tag expiration time for time machine. Pattern: ^[0-9]+(w|m|d|h|s)USD Default: 2w FEATURE_CHANGE_TAG_EXPIRATION Boolean Whether users and organizations are allowed to change the tag expiration for tags in their namespace. Default: True FEATURE_AUTO_PRUNE Boolean When set to True , enables functionality related to the auto-pruning of tags. Default: False 3.9.1. Example tag expiration configuration The following YAML shows a sample tag expiration configuration: DEFAULT_TAG_EXPIRATION: 2w TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w 3.10. Quota management configuration fields Table 3.10. Quota management configuration Field Type Description FEATURE_QUOTA_MANAGEMENT Boolean Enables configuration, caching, and validation for quota management feature. DEFAULT_SYSTEM_REJECT_QUOTA_BYTES String Enables system default quota reject byte allowance for all organizations. By default, no limit is set. 
QUOTA_BACKFILL Boolean Enables the quota backfill worker to calculate the size of pre-existing blobs. Default : True QUOTA_TOTAL_DELAY_SECONDS String The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete. Default : 1800 PERMANENTLY_DELETE_TAGS Boolean Enables functionality related to the removal of tags from the time machine window. Default : False RESET_CHILD_MANIFEST_EXPIRATION Boolean Resets the expirations of temporary tags targeting the child manifests. With this feature set to True , child manifests are immediately garbage collected. Default : False 3.10.1. Example quota management configuration The following YAML is the suggested configuration when enabling quota management. Quota management YAML configuration FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true 3.11. Proxy cache configuration fields Table 3.11. Proxy configuration Field Type Description FEATURE_PROXY_CACHE Boolean Enables Red Hat Quay to act as a pull through cache for upstream registries. Default : false 3.12. Robot account configuration fields Table 3.12. Robot account configuration fields Field Type Description ROBOTS_DISALLOW Boolean When set to true , robot accounts are prevented from all interactions, as well as from being created Default : False 3.13. Pre-configuring Red Hat Quay for automation Red Hat Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface. 3.13.1. Allowing the API to create the first user To create the first user, users need to set the FEATURE_USER_INITIALIZE parameter to true and call the /api/v1/user/initialize API. Unlike all other registry API calls that require an OAuth token generated by an OAuth application in an existing organization, the API endpoint does not require authentication. Users can use the API to create a user such as quayadmin after deploying Red Hat Quay, provided no other users have been created. For more information, see Using the API to create the first user . 3.13.2. Enabling general API access Users should set the BROWSER_API_CALLS_XHR_ONLY configuration option to false to allow general access to the Red Hat Quay registry API. 3.13.3. Adding a superuser After deploying Red Hat Quay, users can create a user and give the first user administrator privileges with full permissions. Users can configure full permissions in advance by using the SUPER_USER configuration object. For example: # ... SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin # ... 3.13.4. Restricting user creation After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting the FEATURE_USER_CREATION to false . For example: # ... FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false # ... 3.13.5. Enabling new functionality in Red Hat Quay 3.10 To use new Red Hat Quay 3.10 functions, enable some or all of the following features: # ... FEATURE_UI_V2: true FEATURE_UI_V2_REPO_SETTINGS: true FEATURE_AUTO_PRUNE: true ROBOTS_DISALLOW: false # ... 3.13.6. Suggested configuration for automation The following config.yaml parameters are suggested for automation: # ... 
FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false # ... 3.13.7. Deploying the Red Hat Quay Operator using the initial configuration Use the following procedure to deploy Red Hat Quay on OpenShift Container Platform using the initial configuration. Prerequisites You have installed the oc CLI. Procedure Create a secret using the configuration file: USD oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret Create a quayregistry.yaml file. Identify the unmanaged components and reference the created secret, for example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret Deploy the Red Hat Quay registry: USD oc create -n quay-enterprise -f quayregistry.yaml Steps Using the API to create the first user 3.13.8. Using the API to create the first user Use the following procedure to create the first user in your Red Hat Quay organization. Prerequisites The config option FEATURE_USER_INITIALIZE must be set to true . No users can already exist in the database. Procedure This procedure requests an OAuth token by specifying "access_token": true . Open your Red Hat Quay configuration file and update the following configuration fields: FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin Stop the Red Hat Quay service by entering the following command: USD sudo podman stop quay Start the Red Hat Quay service by entering the following command: USD sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv} Run the following CURL command to generate a new user with a username, password, email, and access token: USD curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "[email protected]", "access_token": true}' If successful, the command returns an object with the username, email, and encrypted password. For example: {"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"[email protected]","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"} # gitleaks:allow If a user already exists in the database, an error is returned: {"message":"Cannot initialize user in a non-empty database"} If your password is not at least eight characters or contains whitespace, an error is returned: {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."} Log in to your Red Hat Quay deployment by entering the following command: USD sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false Example output Login Succeeded! 3.13.8.1. Using the OAuth token After invoking the API, you can call out the rest of the Red Hat Quay API by specifying the returned OAuth code. Prerequisites You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address. 
Procedure Obtain the list of current users by entering the following command: USD curl -X GET -k -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/ Example output: { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c", "color": "#e7ba52", "kind": "user" }, "super_user": true, "enabled": true } ] } In this instance, the details for the quayadmin user are returned as it is the only user that has been created so far. 3.13.8.2. Using the API to create an organization The following procedure details how to use the API to create a Red Hat Quay organization. Prerequisites You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address. You have called out the rest of the Red Hat Quay API by specifying the returned OAuth code. Procedure To create an organization, use a POST call to api/v1/organization/ endpoint: USD curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{"name": "testorg", "email": "[email protected]"}' Example output: "Created" You can retrieve the details of the organization you created by entering the following command: USD curl -X GET -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg Example output: { "name": "testorg", "email": "[email protected]", "avatar": { "name": "testorg", "hash": "5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8", "color": "#a55194", "kind": "user" }, "is_admin": true, "is_member": true, "teams": { "owners": { "name": "owners", "description": "", "role": "admin", "avatar": { "name": "owners", "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90", "color": "#c7c7c7", "kind": "team" }, "can_view": true, "repo_count": 0, "member_count": 1, "is_synced": false } }, "ordered_teams": [ "owners" ], "invoice_email": false, "invoice_email_address": null, "tag_expiration_s": 1209600, "is_free_account": true } 3.14. Basic configuration fields Table 3.13. Basic configuration Field Type Description REGISTRY_TITLE String If specified, the long-form title for the registry. Displayed in frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters. Default: Red Hat Quay REGISTRY_TITLE_SHORT String If specified, the short-form title for the registry. Title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization's Tutorial page. Default: Red Hat Quay CONTACT_INFO Array of String If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly. [0] String Adds a link to send an e-mail. Pattern: ^mailto:(.)+USD Example: mailto:[email protected] [1] String Adds a link to visit an IRC chat room. Pattern: ^irc://(.)+USD Example: irc://chat.freenode.net:6665/quay [2] String Adds a link to call a phone number. 
Pattern: ^tel:(.)+USD Example: tel:+1-888-930-3475 [3] String Adds a link to a defined URL. Pattern: ^http(s)?://(.)+USD Example: https://twitter.com/quayio 3.15. SSL configuration fields Table 3.14. SSL configuration Field Type Description PREFERRED_URL_SCHEME String One of http or https . Note that users only set their PREFERRED_URL_SCHEME to http when there is no TLS encryption in the communication path from the client to Quay. Users must set their PREFERRED_URL_SCHEME to https when using a TLS-terminating load balancer, a reverse proxy (for example, Nginx), or when using Quay with custom SSL certificates directly. In most cases, the PREFERRED_URL_SCHEME should be https . Default: http SERVER_HOSTNAME (Required) String The URL at which Red Hat Quay is accessible, without the scheme Example: quay-server.example.com SSL_CIPHERS Array of String If specified, the nginx-defined list of SSL ciphers to enable and disable Example: [ ECDHE-RSA-AES128-GCM-SHA256 , ECDHE-ECDSA-AES128-GCM-SHA256 , ECDHE-RSA-AES256-GCM-SHA384 , ECDHE-ECDSA-AES256-GCM-SHA384 , DHE-RSA-AES128-GCM-SHA256 , DHE-DSS-AES128-GCM-SHA256 , kEDH+AESGCM , ECDHE-RSA-AES128-SHA256 , ECDHE-ECDSA-AES128-SHA256 , ECDHE-RSA-AES128-SHA , ECDHE-ECDSA-AES128-SHA , ECDHE-RSA-AES256-SHA384 , ECDHE-ECDSA-AES256-SHA384 , ECDHE-RSA-AES256-SHA , ECDHE-ECDSA-AES256-SHA , DHE-RSA-AES128-SHA256 , DHE-RSA-AES128-SHA , DHE-DSS-AES128-SHA256 , DHE-RSA-AES256-SHA256 , DHE-DSS-AES256-SHA , DHE-DSS-AES256-SHA , AES128-GCM-SHA256 , AES256-GCM-SHA384 , AES128-SHA256 , AES256-SHA256 , AES128-SHA , AES256-SHA , AES , !3DES , !aNULL , !eNULL , !EXPORT , !DES , !RC4 , !MD5 , !PSK , !aECDH , !EDH-DSS-DES-CBC3-SHA , !EDH-RSA-DES-CBC3-SHA , !KRB5-DES-CBC3-SHA ] SSL_PROTOCOLS Array of String If specified, nginx is configured to enable the list of SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Red Hat Quay startup. Example: ['TLSv1','TLSv1.1','TLSv1.2','TLSv1.3'] SESSION_COOKIE_SECURE Boolean Whether the secure property should be set on session cookies Default: False Recommendation: Set to True for all installations using SSL 3.15.1. Configuring SSL Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: Edit the config.yaml file and specify that you want Quay to handle TLS: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... Stop the Quay container and restart the registry 3.16. Adding TLS Certificates to the Red Hat Quay Container To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/ beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory. 3.16.1. Add TLS certificates to Red Hat Quay View certificate to be added to the container Create certs directory and copy certificate there Obtain the Quay container's CONTAINER ID with podman ps : Restart the container with that ID: Examine the certificate copied into the container namespace: 3.17. LDAP configuration fields Table 3.15. LDAP configuration Field Type Description AUTHENTICATION_TYPE (Required) String Must be set to LDAP . FEATURE_TEAM_SYNCING Boolean Whether to allow for team membership to be synced from a backing group in the authentication engine (LDAP or Keystone). Default: true FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP Boolean If enabled, non-superusers can set up syncing on teams using LDAP.
Default: false LDAP_ADMIN_DN String The admin DN for LDAP authentication. LDAP_ADMIN_PASSWD String The admin password for LDAP authentication. LDAP_ALLOW_INSECURE_FALLBACK Boolean Whether or not to allow SSL insecure fallback for LDAP authentication. LDAP_BASE_DN Array of String The base DN for LDAP authentication. LDAP_EMAIL_ATTR String The email attribute for LDAP authentication. LDAP_UID_ATTR String The uid attribute for LDAP authentication. LDAP_URI String The LDAP URI. LDAP_USER_FILTER String The user filter for LDAP authentication. LDAP_USER_RDN Array of String The user RDN for LDAP authentication. TEAM_RESYNC_STALE_TIME String If team syncing is enabled for a team, how often to check its membership and resync if necessary. Pattern: ^[0-9]+(w|m|d|h|s)USD Example: 2h Default: 30m LDAP_SUPERUSER_FILTER String Subset of the LDAP_USER_FILTER configuration field. When configured, allows Red Hat Quay administrators the ability to configure Lightweight Directory Access Protocol (LDAP) users as superusers when Red Hat Quay uses LDAP as its authentication provider. With this field, administrators can add or remove superusers without having to update the Red Hat Quay configuration file and restart their deployment. This field requires that your AUTHENTICATION_TYPE is set to LDAP . LDAP_RESTRICTED_USER_FILTER String Subset of the LDAP_USER_FILTER configuration field. When configured, allows Red Hat Quay administrators the ability to configure Lightweight Directory Access Protocol (LDAP) users as restricted users when Red Hat Quay uses LDAP as its authentication provider. This field requires that your AUTHENTICATION_TYPE is set to LDAP . LDAP_TIMEOUT Integer Determines the maximum time period. in seconds, allowed for establishing a connection to the Lightweight Directory Access Protocol (LDAP) server. Default: 10 LDAP_NETWORK_TIMEOUT Integer Defines the maximum time duration, in seconds, that Red Hat Quay waits for a response from the Lightweight Directory Access Protocol (LDAP) server during network operations. Default: 10 3.17.1. LDAP configuration references Use the following references to update your config.yaml file with the desired configuration field. 3.17.1.1. Basic LDAP configuration --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldaps://<ldap_url_domain_name> LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com 3.17.1.2. LDAP restricted user configuration --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com --- 3.17.1.3. 
LDAP superuser configuration reference --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com 3.18. Mirroring configuration fields Table 3.16. Mirroring configuration Field Type Description FEATURE_REPO_MIRROR Boolean Enable or disable repository mirroring Default: false REPO_MIRROR_INTERVAL Number The number of seconds between checking for repository mirror candidates Default: 30 REPO_MIRROR_SERVER_HOSTNAME String Replaces the SERVER_HOSTNAME as the destination for mirroring. Default: None Example : openshift-quay-service REPO_MIRROR_TLS_VERIFY Boolean Require HTTPS and verify certificates of Quay registry during mirror. Default: false REPO_MIRROR_ROLLBACK Boolean When set to true , the repository rolls back after a failed mirror attempt. Default : false 3.19. Security scanner configuration fields Table 3.17. Security scanner configuration Field Type Description FEATURE_SECURITY_SCANNER Boolean Enable or disable the security scanner Default: false FEATURE_SECURITY_NOTIFICATIONS Boolean If the security scanner is enabled, turn on or turn off security notifications Default: false SECURITY_SCANNER_V4_REINDEX_THRESHOLD String This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the last_indexed datetime in the manifestsecuritystatus table. This parameter is used to avoid trying to re-index every failed manifest on every indexing run. The default time to re-index is 300 seconds. SECURITY_SCANNER_V4_ENDPOINT String The endpoint for the V4 security scanner Pattern: ^http(s)?://(.)+USD Example: http://192.168.99.101:6060 SECURITY_SCANNER_V4_PSK String The generated pre-shared key (PSK) for Clair SECURITY_SCANNER_ENDPOINT String The endpoint for the V2 security scanner Pattern: ^http(s)?://(.)+USD Example: http://192.168.99.100:6060 SECURITY_SCANNER_INDEXING_INTERVAL Integer This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Red Hat Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. Default: 30 FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX Boolean Whether to allow sending notifications about vulnerabilities for new pushes. Default : True SECURITY_SCANNER_V4_MANIFEST_CLEANUP Boolean Whether the Red Hat Quay garbage collector removes manifests that are not referenced by other tags or manifests. Default *: True 3.19.1. Re-indexing with Clair v4 When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This is true until the scanners are changed, as using different scanners will produce different information relating to a specific manifest to be returned in the report. 
Because of this, Clair v4 exposes a state representation of the indexing engine ( /indexer/api/v1/index_state ) to determine whether the scanner configuration has been changed. Red Hat Quay leverages this index state by saving it to the index report when parsing to Quay's database. If this state has changed since the manifest was previously scanned, Red Hat Quay will attempt to re-index that manifest during the periodic indexing process. By default this parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they did not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Red Hat Quay database. 3.19.2. Example security scanner configuration The following YAML is the suggested configuration when enabling the security scanner feature. Security scanner YAML configuration FEATURE_SECURITY_NOTIFICATIONS: true FEATURE_SECURITY_SCANNER: true FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true ... SECURITY_SCANNER_INDEXING_INTERVAL: 30 SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081 SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ== SERVER_HOSTNAME: quay-server.example.com ... 3.20. Helm configuration fields Table 3.18. Helm configuration fields Field Type Description FEATURE_GENERAL_OCI_SUPPORT Boolean Enable support for OCI artifacts. Default: True The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field: Field Media Type Supported content types Helm application/vnd.cncf.helm.config.v1+json application/tar+gzip , application/vnd.cncf.helm.chart.content.v1.tar+gzip Cosign application/vnd.oci.image.config.v1+json application/vnd.dev.cosign.simplesigning.v1+json , application/vnd.dsse.envelope.v1+json SPDX application/vnd.oci.image.config.v1+json text/spdx , text/spdx+xml , text/spdx+json Syft application/vnd.oci.image.config.v1+json application/vnd.syft+json CycloneDX application/vnd.oci.image.config.v1+json application/vnd.cyclonedx , application/vnd.cyclonedx+xml , application/vnd.cyclonedx+json In-toto application/vnd.oci.image.config.v1+json application/vnd.in-toto+json Unknown application/vnd.cncf.openpolicyagent.policy.layer.v1+rego application/vnd.cncf.openpolicyagent.policy.layer.v1+rego , application/vnd.cncf.openpolicyagent.data.layer.v1+json 3.20.1. Configuring Helm The following YAML is the example configuration when enabling Helm. Helm YAML configuration FEATURE_GENERAL_OCI_SUPPORT: true 3.21. Open Container Initiative configuration fields Table 3.19. Additional OCI artifact configuration field Field Type Description ALLOWED_OCI_ARTIFACT_TYPES Object The set of allowed OCI artifact mimetypes and the associated layer types. 3.21.1. Configuring additional artifact types Other OCI artifact types that are not supported by default can be added to your Red Hat Quay deployment by using the ALLOWED_OCI_ARTIFACT_TYPES configuration field. 
Use the following reference to add additional OCI artifact types: OCI artifact types configuration FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4> For example, you can add Singularity (SIF) support by adding the following to your config.yaml file: Example OCI artifact type configuration ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar Note When adding OCI artifact types that are not configured by default, Red Hat Quay administrators will also need to manually add support for cosign and Helm if desired. 3.22. Unknown media types Table 3.20. Unknown media types configuration field Field Type Description IGNORE_UNKNOWN_MEDIATYPES Boolean When enabled, allows a container registry platform to disregard specific restrictions on supported artifact types and accept any unrecognized or unknown media types. Default: false 3.22.1. Configuring unknown media types The following YAML is the example configuration when enabling unknown or unrecognized media types. Unknown media types YAML configuration IGNORE_UNKNOWN_MEDIATYPES: true 3.23. Action log configuration fields 3.23.1. Action log storage configuration Table 3.21. Action log storage configuration Field Type Description FEATURE_LOG_EXPORT Boolean Whether to allow exporting of action logs. Default: True LOGS_MODEL String Specifies the preferred method for handling log data. Values: One of database , transition_reads_both_writes_es , elasticsearch , splunk Default: database LOGS_MODEL_CONFIG Object Logs model config for action logs. LOGS_MODEL_CONFIG [object]: Logs model config for action logs. elasticsearch_config [object]: Elasticsearch cluster configuration. access_key [string]: Elasticsearch user (or IAM key for AWS ES). Example : some_string host [string]: Elasticsearch cluster endpoint. Example : host.elasticsearch.example index_prefix [string]: Elasticsearch's index prefix. Example : logentry_ index_settings [object]: Elasticsearch's index settings use_ssl [boolean]: Use ssl for Elasticsearch. Defaults to True . Example : True secret_key [string]: Elasticsearch password (or IAM secret for AWS ES). Example : some_secret_string aws_region [string]: Amazon web service region. Example : us-east-1 port [number]: Elasticsearch cluster endpoint port. Example : 1234 kinesis_stream_config [object]: AWS Kinesis Stream configuration. aws_secret_key [string]: AWS secret key. Example : some_secret_key stream_name [string]: Kinesis stream to send action logs to. Example : logentry-kinesis-stream aws_access_key [string]: AWS access key. Example : some_access_key retries [number]: Max number of attempts made on a single request. Example : 5 read_timeout [number]: Number of seconds before timeout when reading from a connection. Example : 5 max_pool_connections [number]: The maximum number of connections to keep in a connection pool. Example : 10 aws_region [string]: AWS region. Example : us-east-1 connect_timeout [number]: Number of seconds before timeout when attempting to make a connection. Example : 5 producer [string]: Logs producer if logging to Elasticsearch. enum : kafka, elasticsearch, kinesis_stream Example : kafka kafka_config [object]: Kafka cluster configuration. 
topic [string]: Kafka topic to publish log entries to. Example : logentry bootstrap_servers [array]: List of Kafka brokers to bootstrap the client from. max_block_seconds [number]: Max number of seconds to block during a send() , either because the buffer is full or metadata unavailable. Example : 10 producer [string]: splunk splunk_config [object]: Logs model configuration for Splunk action logs or the Splunk cluster configuration. host [string]: Splunk cluster endpoint. port [integer]: Splunk management cluster endpoint port. bearer_token [string]: The bearer token for Splunk. verify_ssl [boolean]: Enable ( True ) or disable ( False ) TLS/SSL verification for HTTPS connections. index_prefix [string]: Splunk's index prefix. ssl_ca_path [string]: The relative container path to a single .pem file containing a certificate authority (CA) for SSL validation. 3.23.2. Action log rotation and archiving configuration Table 3.22. Action log rotation and archiving configuration Field Type Description FEATURE_ACTION_LOG_ROTATION Boolean Enabling log rotation and archival will move all logs older than 30 days to storage. Default: false ACTION_LOG_ARCHIVE_LOCATION String If action log archiving is enabled, the storage engine in which to place the archived data. Example: : s3_us_east ACTION_LOG_ARCHIVE_PATH String If action log archiving is enabled, the path in storage in which to place the archived data. Example: archives/actionlogs ACTION_LOG_ROTATION_THRESHOLD String The time interval after which to rotate logs. Example: 30d 3.23.3. Action log audit configuration Table 3.23. Audit logs configuration field Field Type Description ACTION_LOG_AUDIT_LOGINS Boolean When set to True , tracks advanced events such as logging into, and out of, the UI, and logging in using Docker for regular users, robot accounts, and for application-specific token accounts. Default: True 3.24. Build logs configuration fields Table 3.24. Build logs configuration fields Field Type Description FEATURE_READER_BUILD_LOGS Boolean If set to true, build logs can be read by those with read access to the repository, rather than only write access or admin access. Default: False LOG_ARCHIVE_LOCATION String The storage location, defined in DISTRIBUTED_STORAGE_CONFIG , in which to place the archived build logs. Example: s3_us_east LOG_ARCHIVE_PATH String The path under the configured storage engine in which to place the archived build logs in .JSON format. Example: archives/buildlogs 3.25. Dockerfile build triggers fields Table 3.25. Dockerfile build support Field Type Description FEATURE_BUILD_SUPPORT Boolean Whether to support Dockerfile build. Default: False SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD Number If not set to None , the number of successive failures that can occur before a build trigger is automatically disabled. Default: 100 SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD Number If not set to None , the number of successive internal errors that can occur before a build trigger is automatically disabled Default: 5 3.25.1. GitHub build triggers Table 3.26. GitHub build triggers Field Type Description FEATURE_GITHUB_BUILD Boolean Whether to support GitHub build triggers. Default: False GITHUB_TRIGGER_CONFIG Object Configuration for using GitHub Enterprise for build triggers. .GITHUB_ENDPOINT (Required) String The endpoint for GitHub Enterprise. Example: https://github.com/ .API_ENDPOINT String The endpoint of the GitHub Enterprise API to use. Must be overridden for github.com . 
Example : https://api.github.com/ .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance; this cannot be shared with GITHUB_LOGIN_CONFIG . .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. 3.25.2. BitBucket build triggers Table 3.27. BitBucket build triggers Field Type Description FEATURE_BITBUCKET_BUILD Boolean Whether to support Bitbucket build triggers. Default: False BITBUCKET_TRIGGER_CONFIG Object Configuration for using BitBucket for build triggers. .CONSUMER_KEY (Required) String The registered consumer key (client ID) for this Red Hat Quay instance. .CONSUMER_SECRET (Required) String The registered consumer secret (client secret) for this Red Hat Quay instance. 3.25.3. GitLab build triggers Table 3.28. GitLab build triggers Field Type Description FEATURE_GITLAB_BUILD Boolean Whether to support GitLab build triggers. Default: False GITLAB_TRIGGER_CONFIG Object Configuration for using Gitlab for build triggers. .GITLAB_ENDPOINT (Required) String The endpoint at which Gitlab Enterprise is running. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. 3.26. Build manager configuration fields Table 3.29. Build manager configuration fields Field Type Description ALLOWED_WORKER_COUNT String Defines how many Build Workers are instantiated per Red Hat Quay pod. Typically set to 1 . ORCHESTRATOR_PREFIX String Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys. REDIS_HOST Object The hostname for your Redis service. REDIS_PASSWORD String The password to authenticate into your Redis service. REDIS_SSL Boolean Defines whether or not your Redis connection uses SSL/TLS. REDIS_SKIP_KEYSPACE_EVENT_SETUP Boolean By default, Red Hat Quay does not set up the keyspace events required for key events at runtime. To do so, set REDIS_SKIP_KEYSPACE_EVENT_SETUP to false . EXECUTOR String Starts a definition of an Executor of this type. Valid values are kubernetes and ec2 . BUILDER_NAMESPACE String Kubernetes namespace where Red Hat Quay Builds will take place. K8S_API_SERVER Object Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place. K8S_API_TLS_CA Object The filepath in the Quay container of the Build cluster's CA certificate for the Quay application to trust when making API calls. KUBERNETES_DISTRIBUTION String Indicates which type of Kubernetes is being used. Valid values are openshift and k8s . CONTAINER_ * Object Define the resource requests and limits for each build pod. NODE_SELECTOR_ * Object Defines the node selector label name-value pair where build Pods should be scheduled. CONTAINER_RUNTIME Object Specifies whether the Builder should run docker or podman . Customers using Red Hat's quay-builder image should set this to podman . SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN Object Defines the Service Account name or token that will be used by build pods. QUAY_USERNAME/QUAY_PASSWORD Object Defines the registry credentials needed to pull the Red Hat Quay build worker image that is specified in the WORKER_IMAGE field. Customers should provide a Red Hat Service Account credential as defined in the section "Creating Registry Service Accounts" against registry.redhat.io in the article at https://access.redhat.com/RegistryAuthentication . 
WORKER_IMAGE Object Image reference for the Red Hat Quay Builder image. registry.redhat.io/quay/quay-builder WORKER_TAG Object Tag for the Builder image desired. The latest version is 3.10. BUILDER_VM_CONTAINER_IMAGE Object The full reference to the container image holding the internal VM needed to run each Red Hat Quay Build. ( registry.redhat.io/quay/quay-builder-qemu-rhcos:3.10 ). SETUP_TIME String Specifies the number of seconds at which a Build times out if it has not yet registered itself with the Build Manager. Defaults at 500 seconds. Builds that time out are attempted to be restarted three times. If the Build does not register itself after three attempts it is considered failed. MINIMUM_RETRY_THRESHOLD String This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting to 0 means there are no restrictions on how many tries the build job needs to have. This value should be kept intentionally small (three or less) to ensure failovers happen quickly during infrastructure failures. You must specify a value for this setting. For example, Kubernetes is set as the first executor and EC2 as the second executor. If you want the last attempt to run a job to always be executed on EC2 and not Kubernetes, you can set the Kubernetes executor's MINIMUM_RETRY_THRESHOLD to 1 and EC2's MINIMUM_RETRY_THRESHOLD to 0 (defaults to 0 if not set). In this case, the Kubernetes' MINIMUM_RETRY_THRESHOLD retries_remaining(1) would evaluate to False , therefore falling back to the second executor configured. SSH_AUTHORIZED_KEYS Object List of SSH keys to bootstrap in the ignition config. This allows other keys to be used to SSH into the EC2 instance or QEMU virtual machine (VM). 3.27. OAuth configuration fields Table 3.30. OAuth fields Field Type Description DIRECT_OAUTH_CLIENTID_WHITELIST Array of String A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. 3.27.1. GitHub OAuth configuration fields Table 3.31. GitHub OAuth fields Field Type Description FEATURE_GITHUB_LOGIN Boolean Whether GitHub login is supported **Default: False GITHUB_LOGIN_CONFIG Object Configuration for using GitHub (Enterprise) as an external login provider. .ALLOWED_ORGANIZATIONS Array of String The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. .API_ENDPOINT String The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com Example: https://api.github.com/ .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance; cannot be shared with GITHUB_TRIGGER_CONFIG . Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 .GITHUB_ENDPOINT (Required) String The endpoint for GitHub (Enterprise). Example : https://github.com/ .ORG_RESTRICT Boolean If true, only users within the organization whitelist can login using this provider. 3.27.2. Google OAuth configuration fields Table 3.32. Google OAuth fields Field Type Description FEATURE_GOOGLE_LOGIN Boolean Whether Google login is supported. **Default: False GOOGLE_LOGIN_CONFIG Object Configuration for using Google for external authentication. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. 
Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 3.28. OIDC configuration fields Table 3.33. OIDC fields Field Type Description <string>_LOGIN_CONFIG (Required) String The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, AZURE_LOGIN_CONFIG , however any arbitrary string is accepted. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 .DEBUGLOG Boolean Whether to enable debugging. .LOGIN_BINDING_FIELD String Used when the internal authorization is set to LDAP. Red Hat Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account. .LOGIN_SCOPES Object Adds additional scopes that Red Hat Quay uses to communicate with the OIDC provider. .OIDC_ENDPOINT_CUSTOM_PARAMS String Support for custom query parameters on OIDC endpoints. The following endpoints are supported: authorization_endpoint , token_endpoint , and user_endpoint . .OIDC_ISSUER String Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as iss which defines who issued the token. By default, this is read from the .well-known/openid-configuration endpoint, which is exposed by every OIDC provider. If this verification fails, there is no login. .OIDC_SERVER (Required) String The address of the OIDC server that is being used for authentication. Example: https://sts.windows.net/6c878... / .PREFERRED_USERNAME_CLAIM_NAME String Sets the preferred username to a parameter from the token. .SERVICE_ICON String Changes the icon on the login screen. .SERVICE_NAME (Required) String The name of the service that is being authenticated. Example: Azure AD .VERIFIED_EMAIL_CLAIM_NAME String The name of the claim that is used to verify the email address of the user. 3.28.1. OIDC configuration The following example shows a sample OIDC configuration. Example OIDC configuration AZURE_LOGIN_CONFIG: CLIENT_ID: <client_id> CLIENT_SECRET: <client_secret> OIDC_SERVER: <oidc_server_address> DEBUGGING: true SERVICE_NAME: Azure AD VERIFIED_EMAIL_CLAIM_NAME: <verified_email> OIDC_ENDPOINT_CUSTOM_PARAMS: "authorization_endpoint": "some": "param" 3.29. Nested repositories configuration fields Support for nested repository path names has been added under the FEATURE_EXTENDED_REPOSITORY_NAMES property. This optional configuration is added to the config.yaml by default. Enablement allows the use of / in repository names. Table 3.34. OCI and nested repositories configuration fields Field Type Description FEATURE_EXTENDED_REPOSITORY_NAMES Boolean Enable support for nested repositories Default: True OCI and nested repositories configuration example FEATURE_EXTENDED_REPOSITORY_NAMES: true 3.30. QuayIntegration configuration fields The following configuration fields are available for the QuayIntegration custom resource: Name Description Schema allowlistNamespaces (Optional) A list of namespaces to include. Array clusterID (Required) The ID associated with this cluster. String credentialsSecret.key (Required) The secret containing credentials to communicate with the Quay registry.
Object denylistNamespaces (Optional) A list of namespaces to exclude. Array insecureRegistry (Optional) Whether to skip TLS verification to the Quay registry Boolean quayHostname (Required) The hostname of the Quay registry. String scheduledImageStreamImport (Optional) Whether to enable image stream importing. Boolean 3.31. Mail configuration fields Table 3.35. Mail configuration fields Field Type Description FEATURE_MAILING Boolean Whether emails are enabled Default: False MAIL_DEFAULT_SENDER String If specified, the e-mail address used as the from address when Red Hat Quay sends e-mails. If none, defaults to [email protected] Example: [email protected] MAIL_PASSWORD String The SMTP password to use when sending e-mails MAIL_PORT Number The SMTP port to use. If not specified, defaults to 587. MAIL_SERVER String The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true. Example: smtp.example.com MAIL_USERNAME String The SMTP username to use when sending e-mails MAIL_USE_TLS Boolean If specified, whether to use TLS for sending e-mails Default: True 3.32. User configuration fields Table 3.36. User configuration fields Field Type Description FEATURE_SUPER_USERS Boolean Whether superusers are supported Default: true FEATURE_USER_CREATION Boolean Whether users can be created (by non-superusers) Default: true FEATURE_USER_LAST_ACCESSED Boolean Whether to record the last time a user was accessed Default: true FEATURE_USER_LOG_ACCESS Boolean If set to true, users will have access to audit logs for their namespace Default: false FEATURE_USER_METADATA Boolean Whether to collect and support user metadata Default: false FEATURE_USERNAME_CONFIRMATION Boolean If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP. Default: true FEATURE_USER_RENAME Boolean If set to true, users can rename their own namespace Default: false FEATURE_INVITE_ONLY_USER_CREATION Boolean Whether users being created must be invited by another user Default: false FRESH_LOGIN_TIMEOUT String The time after which a fresh login requires users to re-enter their password Example : 5m USERFILES_LOCATION String ID of the storage engine in which to place user-uploaded files Example : s3_us_east USERFILES_PATH String Path under storage in which to place user-uploaded files Example : userfiles USER_RECOVERY_TOKEN_LIFETIME String The length of time a token for recovering a user account is valid Pattern : ^[0-9]+(w|m|d|h|s)$ Default : 30m FEATURE_SUPERUSERS_FULL_ACCESS Boolean Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: False FEATURE_SUPERUSERS_ORG_CREATION_ONLY Boolean Whether to only allow superusers to create organizations. Default: False FEATURE_RESTRICTED_USERS Boolean When set with RESTRICTED_USERS_WHITELIST , restricted users cannot create organizations or content in their own namespace. Normal permissions apply for an organization's membership, for example, a restricted user will still have normal permissions in organizations based on the teams that they are members of. Default: False RESTRICTED_USERS_WHITELIST String When set with FEATURE_RESTRICTED_USERS: true , specific users are excluded from the FEATURE_RESTRICTED_USERS setting.
GLOBAL_READONLY_SUPER_USERS String When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. 3.32.1. User configuration fields references Use the following references to update your config.yaml file with the desired configuration field. 3.32.1.1. FEATURE_SUPERUSERS_FULL_ACCESS configuration reference --- SUPER_USERS: - quayadmin FEATURE_SUPERUSERS_FULL_ACCESS: True --- 3.32.1.2. GLOBAL_READONLY_SUPER_USERS configuration reference --- GLOBAL_READONLY_SUPER_USERS: - user1 --- 3.32.1.3. FEATURE_RESTRICTED_USERS configuration reference --- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true --- 3.32.1.4. RESTRICTED_USERS_WHITELIST configuration reference Prerequisites FEATURE_RESTRICTED_USERS is set to true in your config.yaml file. --- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - user1 --- Note When this field is set, whitelisted users can create organizations, or read or write content from the repository even if FEATURE_RESTRICTED_USERS is set to true. Other users, for example, user2 , user3 , and user4 are restricted from creating organizations, reading, or writing content. 3.33. Recaptcha configuration fields Table 3.37. Recaptcha configuration fields Field Type Description FEATURE_RECAPTCHA Boolean Whether Recaptcha is necessary for user login and recovery Default: False RECAPTCHA_SECRET_KEY String If Recaptcha is enabled, the secret key for the Recaptcha service RECAPTCHA_SITE_KEY String If Recaptcha is enabled, the site key for the Recaptcha service 3.34. ACI configuration fields Table 3.38. ACI configuration fields Field Type Description FEATURE_ACI_CONVERSION Boolean Whether to enable conversion to ACIs Default: False GPG2_PRIVATE_KEY_FILENAME String The filename of the private key used to decrypt ACIs GPG2_PRIVATE_KEY_NAME String The name of the private key used to sign ACIs GPG2_PUBLIC_KEY_FILENAME String The filename of the public key used to encrypt ACIs 3.35. JWT configuration fields Table 3.39. JWT configuration fields Field Type Description JWT_AUTH_ISSUER String The endpoint for JWT users Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 JWT_GETUSER_ENDPOINT String The endpoint for JWT users Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 JWT_QUERY_ENDPOINT String The endpoint for JWT queries Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 JWT_VERIFY_ENDPOINT String The endpoint for JWT verification Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 3.36. App tokens configuration fields Table 3.40. App tokens configuration fields Field Type Description FEATURE_APP_SPECIFIC_TOKENS Boolean If enabled, users can create tokens for use by the Docker CLI Default: True APP_SPECIFIC_TOKEN_EXPIRATION String The expiration for external app tokens. Default None Pattern: ^[0-9]+(w|m|d|h|s)$ EXPIRED_APP_SPECIFIC_TOKEN_GC String Duration of time expired external app tokens will remain before being garbage collected Default: 1d 3.37. Miscellaneous configuration fields Table 3.41. Miscellaneous configuration fields Field Type Description ALLOW_PULLS_WITHOUT_STRICT_LOGGING String If true, pulls will still succeed even if the pull audit log entry cannot be written. This is useful if the database is in a read-only state and it is desired for pulls to continue during that time.
Default: False AVATAR_KIND String The types of avatars to display, either generated inline (local) or Gravatar (gravatar) Values: local, gravatar BROWSER_API_CALLS_XHR_ONLY Boolean If enabled, only API calls marked as being made by an XHR will be allowed from browsers Default: True DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT Number The default maximum number of builds that can be queued in a namespace. Default: None ENABLE_HEALTH_DEBUG_SECRET String If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser EXTERNAL_TLS_TERMINATION Boolean Set to true if TLS is supported, but terminated at a layer before Quay. Set to false when Quay is running with its own SSL certificates and receiving TLS traffic directly. FRESH_LOGIN_TIMEOUT String The time after which a fresh login requires users to re-enter their password Example: 5m HEALTH_CHECKER String The configured health check Example: ('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'}) PROMETHEUS_NAMESPACE String The prefix applied to all exposed Prometheus metrics Default: quay PUBLIC_NAMESPACES Array of String If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces. REGISTRY_STATE String The state of the registry Values: normal or read-only SEARCH_MAX_RESULT_PAGE_COUNT Number Maximum number of pages the user can paginate in search before they are limited Default: 10 SEARCH_RESULTS_PER_PAGE Number Number of results returned per page by search page Default: 10 V2_PAGINATION_SIZE Number The number of results returned per page in V2 registry APIs Default: 50 WEBHOOK_HOSTNAME_BLACKLIST Array of String The set of hostnames to disallow from webhooks when validating, beyond localhost CREATE_PRIVATE_REPO_ON_PUSH Boolean Whether new repositories created by push are set to private visibility Default: True CREATE_NAMESPACE_ON_PUSH Boolean Whether new push to a non-existent organization creates it Default: False NON_RATE_LIMITED_NAMESPACES Array of String If rate limiting has been enabled using FEATURE_RATE_LIMITS , you can override it for specific namespace that require unlimited access. FEATURE_UI_V2 Boolean When set, allows users to try the beta UI environment. Default: True FEATURE_REQUIRE_TEAM_INVITE Boolean Whether to require invitations when adding a user to a team Default: True FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH Boolean Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth Default: False FEATURE_RATE_LIMITS Boolean Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to true causes nginx to limit certain API calls to 30 per second. If that feature is not set, API calls are limited to 300 per second (effectively unlimited). 
Default: False FEATURE_FIPS Boolean If set to true, Red Hat Quay will run using FIPS-compliant hash functions Default: False FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL Boolean Whether to allow retrieval of aggregated log counts Default: True FEATURE_ANONYMOUS_ACCESS Boolean Whether to allow anonymous users to browse and pull public repositories Default: True FEATURE_DIRECT_LOGIN Boolean Whether users can directly log in to the UI Default: True FEATURE_LIBRARY_SUPPORT Boolean Whether to allow for "namespace-less" repositories when pulling and pushing from Docker Default: True FEATURE_PARTIAL_USER_AUTOCOMPLETE Boolean If set to true, autocompletion will apply to partial usernames. Default: True FEATURE_PERMANENT_SESSIONS Boolean Whether sessions are permanent Default: True FEATURE_PUBLIC_CATALOG Boolean If set to true, the _catalog endpoint returns public repositories. Otherwise, only private repositories can be returned. Default: False 3.38. Legacy configuration fields The following fields are deprecated or obsolete. Table 3.42. Legacy configuration fields Field Type Description FEATURE_BLACKLISTED_EMAILS Boolean If set to true, no new user accounts may be created if their email domain is blacklisted BLACKLISTED_EMAIL_DOMAINS Array of String The list of email-address domains that is used if FEATURE_BLACKLISTED_EMAILS is set to true Example: "example.com", "example.org" BLACKLIST_V2_SPEC String The Docker CLI versions to which Red Hat Quay will respond that V2 is unsupported Example : <1.8.0 Default: <1.6.0 DOCUMENTATION_ROOT String Root URL for documentation links SECURITY_SCANNER_V4_NAMESPACE_WHITELIST String The namespaces for which the security scanner should be enabled FEATURE_RESTRICTED_V1_PUSH Boolean If set to true, only namespaces listed in V1_PUSH_WHITELIST support V1 push Default: True V1_PUSH_WHITELIST Array of String The array of namespace names that support V1 push if FEATURE_RESTRICTED_V1_PUSH is set to true FEATURE_HELM_OCI_SUPPORT Boolean Enable support for Helm artifacts. Default: False 3.39. User interface v2 configuration fields Table 3.43. User interface v2 configuration fields Field Type Description FEATURE_UI_V2 Boolean When set, allows users to try the beta UI environment. Default: False FEATURE_UI_V2_REPO_SETTINGS Boolean When set to True, enables repository settings in the Red Hat Quay v2 UI. Default: False 3.39.1. v2 user interface configuration With FEATURE_UI_V2 enabled, you can toggle between the current version of the user interface and the new version of the user interface. Important This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags. When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI. There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes. Procedure In your deployment's config.yaml file, add the FEATURE_UI_V2 parameter and set it to true, for example: --- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true --- Log in to your Red Hat Quay deployment.
In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI. Click the toggle button to set it to new UI, and then click Use Beta Environment, for example: 3.40. IPv6 configuration field Table 3.44. IPv6 configuration field Field Type Description FEATURE_LISTEN_IP_VERSION String Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Red Hat Quay fails to start. Default: IPv4 Additional configurations: IPv6 , dual-stack 3.41. Branding configuration fields Table 3.45. Branding configuration fields Field Type Description BRANDING Object Custom branding for logos and URLs in the Red Hat Quay UI. .logo (Required) String Main logo image URL. The header logo defaults to 205x30 PX. The form logo on the Red Hat Quay sign in screen of the web UI defaults to 356.5x39.7 PX. Example: /static/img/quay-horizontal-color.svg .footer_img String Logo for UI footer. Defaults to 144x34 PX. Example: /static/img/RedHat.svg .footer_url String Link for footer image. Example: https://redhat.com 3.41.1. Example configuration for Red Hat Quay branding Branding config.yaml example BRANDING: logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_url: https://opensourceworld.org/ 3.42. Session timeout configuration field The following configuration field relies on the Flask API configuration field of the same name. Table 3.46. Session logout configuration field Field Type Description PERMANENT_SESSION_LIFETIME Integer A timedelta which is used to set the expiration date of a permanent session. The default is 31 days, which makes a permanent session survive for roughly one month. Default: 2678400 3.42.1. Example session timeout configuration The following YAML is the suggested configuration when enabling session lifetime. Important Altering session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If you set the timeout too short, it might interrupt your workflow. Session timeout YAML configuration PERMANENT_SESSION_LIFETIME: 3000 | [
"DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert",
"DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert",
"DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default",
"DISTRIBUTED_STORAGE_CONFIG: rhocsStorage: - RHOCSStorage - access_key: access_key_here secret_key: secret_key_here bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56 hostname: s3.openshift-storage.svc.cluster.local is_secure: 'true' port: '443' storage_path: /datastorage/registry",
"DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: - RadosGWStorage - access_key: <access_key_here> secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true port: '443' storage_path: /datastorage/registry",
"DISTRIBUTED_STORAGE_CONFIG: s3Storage: 1 - RadosGWStorage - access_key: <access_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true secret_key: <secret_key_here> storage_path: /datastorage/registry",
"DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage 1 - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - s3Storage",
"DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage",
"DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage",
"DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 1 ca_cert_path: /conf/stack/swift.cert\" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage",
"DISTRIBUTED_STORAGE_CONFIG: nutanixStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - nutanixStorage",
"DISTRIBUTED_STORAGE_CONFIG: default: - IBMCloudStorage #actual driver - access_key: <access_key_here> #parameters secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: 'true' port: '443' storage_path: /datastorage/registry maximum_chunk_size_mb: 100mb 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - default DISTRIBUTED_STORAGE_PREFERENCE: - default",
"BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true ssl_*: <path_location_or_certificate>",
"DATA_MODEL_CACHE_CONFIG: engine: redis redis_config: primary: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > replica: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false >",
"DATA_MODEL_CACHE_CONFIG: engine: rediscluster redis_config: startup_nodes: - host: <cluster-host> port: <port> password: <password if ssl: true> read_from_replicas: <true|false> skip_full_coverage_check: <true | false> ssl: <true | false >",
"DEFAULT_TAG_EXPIRATION: 2w TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w",
"**Default:** `False`",
"FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true",
"SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false",
"FEATURE_UI_V2: true FEATURE_UI_V2_REPO_SETTINGS: true FEATURE_AUTO_PRUNE: true ROBOTS_DISALLOW: false",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false",
"oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret",
"oc create -n quay-enterprise -f quayregistry.yaml",
"FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin",
"sudo podman stop quay",
"sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ \"username\": \"quayadmin\", \"password\":\"quaypass12345\", \"email\": \"[email protected]\", \"access_token\": true}'",
"{\"access_token\":\"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\", \"email\":\"[email protected]\",\"encrypted_password\":\"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW\",\"username\":\"quayadmin\"} # gitleaks:allow",
"{\"message\":\"Cannot initialize user in a non-empty database\"}",
"{\"message\":\"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace.\"}",
"sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false",
"Login Succeeded!",
"curl -X GET -k -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/",
"{ \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c\", \"color\": \"#e7ba52\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }",
"curl -X POST -k --header 'Content-Type: application/json' -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{\"name\": \"testorg\", \"email\": \"[email protected]\"}'",
"\"Created\"",
"curl -X GET -k --header 'Content-Type: application/json' -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg",
"{ \"name\": \"testorg\", \"email\": \"[email protected]\", \"avatar\": { \"name\": \"testorg\", \"hash\": \"5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8\", \"color\": \"#a55194\", \"kind\": \"user\" }, \"is_admin\": true, \"is_member\": true, \"teams\": { \"owners\": { \"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": { \"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\" }, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false } }, \"ordered_teams\": [ \"owners\" ], \"invoice_email\": false, \"invoice_email_address\": null, \"tag_expiration_s\": 1209600, \"is_free_account\": true }",
"cp ~/ssl.cert USDQUAY/config cp ~/ssl.key USDQUAY/config cd USDQUAY/config",
"SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https",
"cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] -----END CERTIFICATE-----",
"mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt",
"sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.10.9 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller",
"sudo podman restart 5a3e82c4a75f",
"sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV",
"--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldaps://<ldap_url_domain_name> LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com",
"--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com ---",
"--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com",
"FEATURE_SECURITY_NOTIFICATIONS: true FEATURE_SECURITY_SCANNER: true FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true SECURITY_SCANNER_INDEXING_INTERVAL: 30 SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081 SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ== SERVER_HOSTNAME: quay-server.example.com",
"FEATURE_GENERAL_OCI_SUPPORT: true",
"FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4>",
"ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar",
"IGNORE_UNKNOWN_MEDIATYPES: true",
"AZURE_LOGIN_CONFIG: CLIENT_ID: <client_id> CLIENT_SECRET: <client_secret> OIDC_SERVER: <oidc_server_address_> DEBUGGING: true SERVICE_NAME: Azure AD VERIFIED_EMAIL_CLAIM_NAME: <verified_email> OIDC_ENDPOINT_CUSTOM_PARAMS\": \"authorization_endpoint\": \"some\": \"param\",",
"FEATURE_EXTENDED_REPOSITORY_NAMES: true",
"--- SUPER_USERS: - quayadmin FEATURE_SUPERUSERS_FULL_ACCESS: True ---",
"--- GLOBAL_READONLY_SUPER_USERS: - user1 ---",
"--- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true ---",
"--- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - user1 ---",
"--- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true ---",
"BRANDING: logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_url: https://opensourceworld.org/",
"PERMANENT_SESSION_LIFETIME: 3000"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/configure_red_hat_quay/config-fields-intro |
Chapter 5. Installing the policy system | Chapter 5. Installing the policy system Installing the Skupper policy system on a cluster allows you to control how Skupper is used on the cluster. Note Applying the policy system in a cluster without specific policy rules prohibits site linking and service exposure. If you are installing the policy system on a cluster where there are existing sites, you must create policies before installing the policy system to avoid disruption; a sample policy resource is sketched after this procedure. Prerequisites Access to a Kubernetes cluster with cluster-admin privileges. The Red Hat Service Interconnect Operator is installed. Procedure Log in to your cluster. Deploy the policy CRD: where the contents of skupper_cluster_policy_crd.yaml are specified in the Appendix A, YAML for the Skupper policy CRD appendix. Additional information See Securing a service network using policies for more information about using policies. | [
"kubectl apply -f skupper_cluster_policy_crd.yaml customresourcedefinition.apiextensions.k8s.io/skupperclusterpolicies.skupper.io created clusterrole.rbac.authorization.k8s.io/skupper-service-controller created"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/installation/installing-policy-system |
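A minimal sketch of a policy resource for an existing site is shown below. It assumes the SkupperClusterPolicy kind with the skupper.io/v1alpha1 API version, and the namespace, field names, and values shown are illustrative assumptions only; consult the policy CRD in Appendix A and the Securing a service network using policies guide for the authoritative schema.

apiVersion: skupper.io/v1alpha1
kind: SkupperClusterPolicy
metadata:
  name: allow-existing-site          # illustrative name
spec:
  namespaces:
    - "example-namespace"            # namespaces the policy applies to (assumed field)
  allowIncomingLinks: true           # permit incoming site links (assumed field)
  allowedExposedResources:
    - "deployment/example-app"       # workloads that may continue to be exposed (assumed field)

Creating policies such as this for existing sites before enabling the policy system helps avoid interrupting established links and exposed services.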
Chapter 5. ValidatingAdmissionPolicy [admissionregistration.k8s.io/v1] | Chapter 5. ValidatingAdmissionPolicy [admissionregistration.k8s.io/v1] Description ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . spec object ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy. status object ValidatingAdmissionPolicyStatus represents the status of an admission validation policy. 5.1.1. .spec Description ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy. Type object Property Type Description auditAnnotations array auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; a least one of validations or auditAnnotations is required. auditAnnotations[] object AuditAnnotation describes how to produce an audit annotation for an API request. failurePolicy string failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings. A policy is invalid if spec.paramKind refers to a non-existent Kind. A binding is invalid if spec.paramRef.name refers to a non-existent resource. failurePolicy does not define how validations that evaluate to false are handled. When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced. Allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. If a parameter object is provided, it can be accessed via the params handle in the same manner as validation expressions. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the policy is skipped. 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated. 3. 
If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the policy is skipped matchConditions[] object MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. matchConstraints object MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) paramKind object ParamKind is a tuple of Group Kind and Version. validations array Validations contain CEL expressions which is used to apply the validation. Validations and AuditAnnotations may not both be empty; a minimum of one Validations or AuditAnnotations is required. validations[] object Validation specifies the CEL expression which is used to apply the validation. variables array Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under variables in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy. The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be sorted by the order of first appearance and acyclic. variables[] object Variable is the definition of a variable that is used for composition. A variable is defined as a named expression. 5.1.2. .spec.auditAnnotations Description auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; a least one of validations or auditAnnotations is required. Type array 5.1.3. .spec.auditAnnotations[] Description AuditAnnotation describes how to produce an audit annotation for an API request. Type object Required key valueExpression Property Type Description key string key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length. The key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: "{ValidatingAdmissionPolicy name}/{key}". If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded. Required. valueExpression string valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb. If multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list. Required. 
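For orientation, a minimal manifest that combines the matchConstraints, validations, and auditAnnotations fields documented in this chapter might look as follows. The policy name, matched resources, and CEL expressions are illustrative only, and a corresponding ValidatingAdmissionPolicyBinding is still required before the policy is enforced.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit                 # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"        # CEL expression evaluated against the incoming object
      message: "replica count must not exceed 5"
      reason: Invalid
  auditAnnotations:
    - key: "requested-replicas"
      valueExpression: "string(object.spec.replicas)"   # recorded as an audit annotation value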
5.1.4. .spec.matchConditions Description MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. If a parameter object is provided, it can be accessed via the params handle in the same manner as validation expressions. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the policy is skipped. 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the policy is skipped Type array 5.1.5. .spec.matchConditions[] Description MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', ' ' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9 .]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 5.1.6. .spec.matchConstraints Description MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) Type object Property Type Description excludeResourceRules array ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) excludeResourceRules[] object NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. matchPolicy string matchPolicy defines how the "MatchResources" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. 
For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. namespaceSelector LabelSelector NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the policy on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. resourceRules array ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches any Rule. resourceRules[] object NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. 5.1.7. .spec.matchConstraints.excludeResourceRules Description ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded) Type array 5.1.8. .spec.matchConstraints.excludeResourceRules[] Description NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. 
Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. ' ' is all groups. If ' ' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. ' ' is all versions. If ' ' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. ' ' means all resources, but not subresources. 'pods/ ' means all subresources of pods. ' /scale' means all scale subresources. ' /*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and " " "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 5.1.9. .spec.matchConstraints.resourceRules Description ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches any Rule. Type array 5.1.10. .spec.matchConstraints.resourceRules[] Description NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. ' ' is all groups. If ' ' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. ' ' is all versions. If ' ' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. ' ' means all resources, but not subresources. 'pods/ ' means all subresources of pods. ' /scale' means all scale subresources. ' /*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. 
Valid values are "Cluster", "Namespaced", and " " "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 5.1.11. .spec.paramKind Description ParamKind is a tuple of Group Kind and Version. Type object Property Type Description apiVersion string APIVersion is the API group version the resources belong to. In format of "group/version". Required. kind string Kind is the API kind the resources belong to. Required. 5.1.12. .spec.validations Description Validations contain CEL expressions which is used to apply the validation. Validations and AuditAnnotations may not both be empty; a minimum of one Validations or AuditAnnotations is required. Type array 5.1.13. .spec.validations[] Description Validation specifies the CEL expression which is used to apply the validation. Type object Required expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables: - 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value. For example, a variable named 'foo' can be accessed as 'variables.foo'. - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. The apiVersion , kind , metadata.name and metadata.generateName are always accessible from the root of the object. No other metadata properties are accessible. Only property names of the form [a-zA-Z_.-/][a-zA-Z0-9_.-/]* are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - ' ' escapes to ' underscores ' - '.' escapes to ' dot ' - '-' escapes to ' dash ' - '/' escapes to ' slash ' - Property names that exactly match a CEL RESERVED keyword escape to ' {keyword} '. The keywords are: "true", "false", "null", "in", "as", "break", "const", "continue", "else", "for", "function", "if", "import", "let", "loop", "package", "namespace", "return". Examples: - Expression accessing a property named "namespace": {"Expression": "object. namespace > 0"} - Expression accessing a property named "x-prop": {"Expression": "object.x dash prop > 0"} - Expression accessing a property named "redact d": {"Expression": "object.redact underscores d > 0"} Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. 
Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type: - 'set': X + Y performs a union where the array positions of all elements in X are preserved and non-intersecting elements in Y are appended, retaining their partial order. - 'map': X + Y performs a merge where the array positions of all keys in X are preserved but the values are overwritten by values in Y when the key sets of X and Y intersect. Elements in Y with non-intersecting keys are appended, retaining their partial order. Required. message string Message represents the message displayed when validation fails. The message is required if the Expression contains line breaks. The message must not contain line breaks. If unset, the message is "failed rule: {Rule}". e.g. "must be a URL with the host matching spec.host" If the Expression contains line breaks. Message is required. The message must not contain line breaks. If unset, the message is "failed Expression: {Expression}". messageExpression string messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the expression except for 'authorizer' and 'authorizer.requestResource'. Example: "object.x must be less than max ("string(params.max)")" reason string Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: "Unauthorized", "Forbidden", "Invalid", "RequestEntityTooLarge". If not set, StatusReasonInvalid is used in the response to the client. 5.1.14. .spec.variables Description Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under variables in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy. The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be sorted by the order of first appearance and acyclic. Type array 5.1.15. .spec.variables[] Description Variable is the definition of a variable that is used for composition. A variable is defined as a named expression. Type object Required name expression Property Type Description expression string Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation. name string Name is the name of the variable. 
The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through variables For example, if name is "foo", the variable will be available as variables.foo 5.1.16. .status Description ValidatingAdmissionPolicyStatus represents the status of an admission validation policy. Type object Property Type Description conditions array (Condition) The conditions represent the latest available observations of a policy's current state. observedGeneration integer The generation observed by the controller. typeChecking object TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy 5.1.17. .status.typeChecking Description TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy Type object Property Type Description expressionWarnings array The type checking warnings for each expression. expressionWarnings[] object ExpressionWarning is a warning information that targets a specific expression. 5.1.18. .status.typeChecking.expressionWarnings Description The type checking warnings for each expression. Type array 5.1.19. .status.typeChecking.expressionWarnings[] Description ExpressionWarning is a warning information that targets a specific expression. Type object Required fieldRef warning Property Type Description fieldRef string The path to the field that refers the expression. For example, the reference to the expression of the first item of validations is "spec.validations[0].expression" warning string The content of type checking information in a human-readable form. Each line of the warning contains the type that the expression is checked against, followed by the type check error from the compiler. 5.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies DELETE : delete collection of ValidatingAdmissionPolicy GET : list or watch objects of kind ValidatingAdmissionPolicy POST : create a ValidatingAdmissionPolicy /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies GET : watch individual changes to a list of ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name} DELETE : delete a ValidatingAdmissionPolicy GET : read the specified ValidatingAdmissionPolicy PATCH : partially update the specified ValidatingAdmissionPolicy PUT : replace the specified ValidatingAdmissionPolicy /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies/{name} GET : watch changes to an object of kind ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name}/status GET : read status of the specified ValidatingAdmissionPolicy PATCH : partially update status of the specified ValidatingAdmissionPolicy PUT : replace status of the specified ValidatingAdmissionPolicy 5.2.1. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies HTTP method DELETE Description delete collection of ValidatingAdmissionPolicy Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ValidatingAdmissionPolicy Table 5.3. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create a ValidatingAdmissionPolicy Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body ValidatingAdmissionPolicy schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 202 - Accepted ValidatingAdmissionPolicy schema 401 - Unauthorized Empty 5.2.2. /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies HTTP method GET Description watch individual changes to a list of ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the ValidatingAdmissionPolicy HTTP method DELETE Description delete a ValidatingAdmissionPolicy Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ValidatingAdmissionPolicy Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ValidatingAdmissionPolicy Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ValidatingAdmissionPolicy Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body ValidatingAdmissionPolicy schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty 5.2.4. /apis/admissionregistration.k8s.io/v1/watch/validatingadmissionpolicies/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the ValidatingAdmissionPolicy HTTP method GET Description watch changes to an object of kind ValidatingAdmissionPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/{name}/status Table 5.19. 
Global path parameters Parameter Type Description name string name of the ValidatingAdmissionPolicy HTTP method GET Description read status of the specified ValidatingAdmissionPolicy Table 5.20. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ValidatingAdmissionPolicy Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. HTTP responses HTTP code Reponse body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ValidatingAdmissionPolicy Table 5.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.24. Body parameters Parameter Type Description body ValidatingAdmissionPolicy schema Table 5.25. 
HTTP responses HTTP code Response body 200 - OK ValidatingAdmissionPolicy schema 201 - Created ValidatingAdmissionPolicy schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extension_apis/validatingadmissionpolicy-admissionregistration-k8s-io-v1 |
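The schema above maps directly onto a manifest. The following is a minimal sketch of a ValidatingAdmissionPolicy that combines variables, validations, and messageExpression; the policy name, the Deployment match rule, and the replica limit are illustrative assumptions rather than values defined by this reference, and the policy still needs a ValidatingAdmissionPolicyBinding before it is enforced.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit                 # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  variables:                               # available to later expressions as variables.<name>
  - name: replicas
    expression: "object.spec.replicas"
  validations:
  - expression: "variables.replicas <= 5"
    messageExpression: "'replica count ' + string(variables.replicas) + ' exceeds the assumed maximum of 5'"
    reason: Invalid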
Chapter 1. Preparing to install on Alibaba Cloud | Chapter 1. Preparing to install on Alibaba Cloud Important Installing OpenShift Container Platform on Alibaba Cloud is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on Alibaba Cloud Before installing OpenShift Container Platform on Alibaba Cloud, you must configure and register your domain, create a Resource Access Management (RAM) user for the installation, and review the supported Alibaba Cloud data center regions and zones for the installation. 1.3. Registering and Configuring Alibaba Cloud Domain To install OpenShift Container Platform, the Alibaba Cloud account you use must have a dedicated public hosted zone. This zone must be authoritative for the domain. The zone provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Alibaba Cloud or another source. Note If you purchase a new domain through Alibaba Cloud, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through Alibaba Cloud, see Alibaba Cloud domains . If you are using an existing domain and registrar, migrate its DNS to Alibaba Cloud. See Domain name transfer in the Alibaba Cloud documentation. Configure DNS for your domain. This includes: Registering a generic domain name . Completing real-name verification for your domain name . Applying for an Internet Content Provider (ICP) filing . Enabling domain name resolution . Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you are using a subdomain, follow the procedures of your company to add its delegation records to the parent domain. 1.4. Supported Alibaba regions You can deploy an OpenShift Container Platform cluster to the regions listed in the Alibaba Regions and zones documentation . 1.5. Next steps Create the required Alibaba Cloud resources . | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_alibaba/preparing-to-install-on-alibaba |
Chapter 7. ConsoleQuickStart [console.openshift.io/v1] | Chapter 7. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleQuickStartSpec is the desired quick start configuration. 7.1.1. .spec Description ConsoleQuickStartSpec is the desired quick start configuration. Type object Required description displayName durationMinutes introduction tasks Property Type Description accessReviewResources array accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. accessReviewResources[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface conclusion string conclusion sums up the Quick Start and suggests the possible steps. (includes markdown) description string description is the description of the Quick Start. (includes markdown) displayName string displayName is the display name of the Quick Start. durationMinutes integer durationMinutes describes approximately how many minutes it will take to complete the Quick Start. icon string icon is a base64 encoded image that will be displayed beside the Quick Start display name. The icon should be an vector image for easy scaling. The size of the icon should be 40x40. introduction string introduction describes the purpose of the Quick Start. (includes markdown) nextQuickStart array (string) nextQuickStart is a list of the following Quick Starts, suggested for the user to try. prerequisites array (string) prerequisites contains all prerequisites that need to be met before taking a Quick Start. (includes markdown) tags array (string) tags is a list of strings that describe the Quick Start. tasks array tasks is the list of steps the user has to perform to complete the Quick Start. tasks[] object ConsoleQuickStartTask is a single step in a Quick Start. 7.1.2. .spec.accessReviewResources Description accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. Type array 7.1.3. 
.spec.accessReviewResources[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 7.1.4. .spec.tasks Description tasks is the list of steps the user has to perform to complete the Quick Start. Type array 7.1.5. .spec.tasks[] Description ConsoleQuickStartTask is a single step in a Quick Start. Type object Required description title Property Type Description description string description describes the steps needed to complete the task. (includes markdown) review object review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. summary object summary contains information about the passed step. title string title describes the task and is displayed as a step heading. 7.1.6. .spec.tasks[].review Description review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. Type object Required failedTaskHelp instructions Property Type Description failedTaskHelp string failedTaskHelp contains suggestions for a failed task review and is shown at the end of task. (includes markdown) instructions string instructions contains steps that user needs to take in order to validate his work after going through a task. (includes markdown) 7.1.7. .spec.tasks[].summary Description summary contains information about the passed step. Type object Required failed success Property Type Description failed string failed briefly describes the unsuccessfully passed task. (includes markdown) success string success describes the successfully passed task. 7.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolequickstarts DELETE : delete collection of ConsoleQuickStart GET : list objects of kind ConsoleQuickStart POST : create a ConsoleQuickStart /apis/console.openshift.io/v1/consolequickstarts/{name} DELETE : delete a ConsoleQuickStart GET : read the specified ConsoleQuickStart PATCH : partially update the specified ConsoleQuickStart PUT : replace the specified ConsoleQuickStart 7.2.1. /apis/console.openshift.io/v1/consolequickstarts Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleQuickStart Table 7.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleQuickStart Table 7.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStartList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleQuickStart Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.8. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 202 - Accepted ConsoleQuickStart schema 401 - Unauthorized Empty 7.2.2. /apis/console.openshift.io/v1/consolequickstarts/{name} Table 7.9. Global path parameters Parameter Type Description name string name of the ConsoleQuickStart Table 7.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleQuickStart Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.12. Body parameters Parameter Type Description body DeleteOptions schema Table 7.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleQuickStart Table 7.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleQuickStart Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body Patch schema Table 7.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleQuickStart Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/console_apis/consolequickstart-console-openshift-io-v1 |
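As a concrete illustration of the required spec fields, the following manifest is a minimal sketch of a ConsoleQuickStart; the quick start name, task wording, and duration are assumptions made for this example and are not taken from the schema above.

apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: explore-sample-app                 # hypothetical name
spec:
  displayName: Explore a sample application
  durationMinutes: 10
  description: Deploy and inspect a sample application in the web console.
  introduction: This quick start walks through deploying a sample application.
  tasks:
  - title: Create the deployment
    description: In the Developer perspective, use **+Add** to create the sample deployment.
    review:
      instructions: Is the new deployment shown on the Topology page?
      failedTaskHelp: Verify that the correct project is selected and retry the task.
    summary:
      success: You created the sample deployment.
      failed: The sample deployment was not created.
  conclusion: You deployed and inspected a sample application.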
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint | Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create the RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/accessing-the-rados-object-gateway-s3-endpoint_rhodf |
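Because the route is created by default, it can be verified directly with the OpenShift CLI; a minimal sketch, assuming the object store runs in the usual openshift-storage namespace:

# Print the host name exposed by the default RGW route (namespace assumed)
oc get route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o jsonpath='{.spec.host}{"\n"}'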
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_virtual_machine_subscriptions/providing-feedback-on-red-hat-documentation_vm-subs-satellite |
5.4. Virtual Memory | 5.4. Virtual Memory 5.4.1. Hot Plugging Virtual Memory You can hot plug virtual memory. Hot plugging means enabling or disabling devices while a virtual machine is running. Each time memory is hot plugged, it appears as a new memory device in the Vm Devices tab in the details view of the virtual machine, up to a maximum of 16 available slots. When the virtual machine is restarted, these devices are cleared from the Vm Devices tab without reducing the virtual machine's memory, allowing you to hot plug more memory devices. If the hot plug fails (for example, if there are no more available slots), the memory increase will be applied when the virtual machine is restarted. Important This feature is currently not supported for the self-hosted engine Manager virtual machine. Note If you might need to later hot unplug the memory that you are now hot plugging, see Hot Unplugging Virtual Memory . Procedure Click Compute Virtual Machines and select a running virtual machine. Click Edit . Click the System tab. Increase the Memory Size by entering the total amount required. Memory can be added in multiples of 256 MB. By default, the maximum memory allowed for the virtual machine is set to 4x the memory size specified. Though the value is changed in the user interface, the maximum value is not hot plugged, and you will see the pending changes icon ( ). To avoid that, you can change the maximum memory back to the original value. Click OK . This action opens the Pending Virtual Machine changes window, as some values such as maxMemorySizeMb and minAllocatedMem will not change until the virtual machine is restarted. However, the hot plug action is triggered by the change to the Memory Size value, which can be applied immediately. Click OK . The virtual machine's Defined Memory is updated in the General tab in the details view. You can see the newly added memory device in the Vm Devices tab in the details view. 5.4.2. Hot Unplugging Virtual Memory You can hot unplug virtual memory. Hot unplugging disables devices while a virtual machine is running. Prerequisites Only memory added with hot plugging can be hot unplugged. The virtual machine's operating system must support memory hot unplugging. The virtual machine must not have a memory balloon device enabled. This feature is disabled by default. All blocks of the hot-plugged memory must be set to online_movable in the virtual machine's device management rules. In virtual machines running up-to-date versions of Red Hat Enterprise Linux or CoreOS, this rule is set by default. For information on device management rules, consult the documentation for the virtual machine's operating system. To ensure that hot plugged memory can be hot unplugged later, add the movable_node option to the kernel command line of the virtual machine as follows and reboot the virtual machine: # grubby --update-kernel=ALL --args="movable_node" For more information, see Setting kernel command-line parameters in the RHEL 8 document Managing, monitoring and updating the kernel . Procedure Click Compute Virtual Machines and select a running virtual machine. Click the Vm Devices tab. In the Hot Unplug column, click Hot Unplug beside the memory device to be removed. Click OK in the Memory Hot Unplug window. The Physical Memory Guaranteed value for the virtual machine is decremented automatically if necessary. | [
"grubby --update-kernel=ALL --args=\"movable_node\""
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Virtual_Memory |
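To confirm from inside the guest that hot-plugged memory can later be hot unplugged, the memory block state can be inspected directly; the udev rule below is an illustrative assumption about how a device management rule that brings new blocks online as movable might look, not a file mandated by this guide.

# Each hot-pluggable memory block should report online_movable
grep . /sys/devices/system/memory/memory*/state

# Assumed example rule, for instance /etc/udev/rules.d/99-hotplug-memory.rules
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online_movable"

Combined with the grubby --update-kernel=ALL --args="movable_node" command shown above, this keeps hot-plugged blocks in a movable zone after the next reboot.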
Chapter 8. Web terminal | Chapter 8. Web terminal 8.1. Installing the web terminal You can install the web terminal by using the Web Terminal Operator listed in the OpenShift Container Platform OperatorHub. When you install the Web Terminal Operator, the custom resource definitions (CRDs) that are required for the command line configuration, such as the DevWorkspace CRD, are automatically installed. The web console creates the required resources when you open the web terminal. Prerequisites You are logged into the OpenShift Container Platform web console. You have cluster administrator permissions. Procedure In the Administrator perspective of the web console, navigate to Operators OperatorHub . Use the Filter by keyword box to search for the Web Terminal Operator in the catalog, and then click the Web Terminal tile. Read the brief description about the Operator on the Web Terminal page, and then click Install . On the Install Operator page, retain the default values for all fields. The fast option in the Update Channel menu enables installation of the latest release of the Web Terminal Operator. The All namespaces on the cluster option in the Installation Mode menu enables the Operator to watch and be available to all namespaces in the cluster. The openshift-operators option in the Installed Namespace menu installs the Operator in the default openshift-operators namespace. The Automatic option in the Approval Strategy menu ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager. Click Install . In the Installed Operators page, click the View Operator to verify that the Operator is listed on the Installed Operators page. Note The Web Terminal Operator installs the DevWorkspace Operator as a dependency. After the Operator is installed, refresh your page to see the command line terminal icon ( ) in the masthead of the console. 8.2. Configuring the web terminal You can configure timeout and image settings for the web terminal, either for your current session or for all user sessions if you are a cluster administrator. 8.2.1. Configuring the web terminal timeout for a session You can change the default timeout period for the web terminal for your current session. Prerequisites You have access to an OpenShift Container Platform cluster that has the Web Terminal Operator installed. You are logged into the web console. Procedure Click the web terminal icon ( ). Optional: Set the web terminal timeout for the current session: Click Timeout. In the field that appears, enter the timeout value. From the drop-down list, select a timeout interval of Seconds , Minutes , Hours , or Milliseconds . Optional: Select a custom image for the web terminal to use. Click Image. In the field that appears, enter the URL of the image that you want to use. Click Start to begin a terminal instance using the specified timeout setting. 8.2.2. Configuring the web terminal timeout for all users You can use the Administrator perspective of the web console to set the default web terminal timeout period for all users. Prerequisites You have cluster administrator permissions and are logged in to the web console. You have installed the Web Terminal Operator. Procedure In the Administrator perspective, navigate to Administation Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . 
From the Action drop-down list, select Customize , which opens the Cluster configuration page. Click the Web Terminal tab, which opens the Web Terminal Configuration page. Set a value for the timeout. From the drop-down list, select a time interval of Seconds , Minutes , Hours , or Milliseconds . Click Save . 8.2.3. Configuring the web terminal image for a session You can change the default image for the web terminal for your current session. Prerequisites You have access to an OpenShift Container Platform cluster that has the Web Terminal Operator installed. You are logged into the web console. Procedure Click the web terminal icon ( ). Click Image to display advanced configuration options for the web terminal image. Enter the URL of the image that you want to use. Click Start to begin a terminal instance using the specified image setting. 8.2.4. Configuring the web terminal image for all users You can use the Administrator perspective of the web console to set the default web terminal image for all users. Prerequisites You have cluster administrator permissions and are logged in to the web console. You have installed the Web Terminal Operator. Procedure In the Administrator perspective, navigate to Administation Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . From the Action drop-down list, select Customize , which opens the Cluster configuration page. Click the Web Terminal tab, which opens the Web Terminal Configuration page. Enter the URL of the image that you want to use. Click Save . 8.3. Using the web terminal You can launch an embedded command line terminal instance in the web console. This terminal instance is preinstalled with common CLI tools for interacting with the cluster, such as oc , kubectl , odo , kn , tkn , helm , and subctl . It also has the context of the project you are working on and automatically logs you in using your credentials. 8.3.1. Accessing the web terminal After the Web Terminal Operator is installed, you can access the web terminal. After the web terminal is initialized, you can use the preinstalled CLI tools like oc , kubectl , odo , kn , tkn , helm , and subctl in the web terminal. You can re-run commands by selecting them from the list of commands you have run in the terminal. These commands persist across multiple terminal sessions. The web terminal remains open until you close it or until you close the browser window or tab. Prerequisites You have access to an OpenShift Container Platform cluster and are logged into the web console. The Web Terminal Operator is installed on your cluster. Procedure To launch the web terminal, click the command line terminal icon ( ) in the masthead of the console. A web terminal instance is displayed in the Command line terminal pane. This instance is automatically logged in with your credentials. If a project has not been selected in the current session, select the project where the DevWorkspace CR must be created from the Project drop-down list. By default, the current project is selected. Note One DevWorkspace CR defines the web terminal of one user. This CR contains details about the user's web terminal status and container image components. The DevWorkspace CR is created only if it does not already exist. The openshift-terminal project is the default project used for cluster administrators. They do not have the option to choose another project. 
The Web Terminal Operator installs the DevWorkspace Operator as a dependency. Optional: Set the web terminal timeout for the current session: Click Timeout. In the field that appears, enter the timeout value. From the drop-down list, select a timeout interval of Seconds , Minutes , Hours , or Milliseconds . Optional: Select a custom image for the web terminal to use. Click Image. In the field that appears, enter the URL of the image that you want to use. Click Start to initialize the web terminal using the selected project. Click + to open multiple tabs within the web terminal in the console. 8.4. Troubleshooting the web terminal 8.4.1. Web terminal and network policies The web terminal might fail to start if the cluster has network policies configured. To start a web terminal instance, the Web Terminal Operator must communicate with the web terminal's pod to verify it is running, and the OpenShift Container Platform web console needs to send information to automatically log in to the cluster within the terminal. If either step fails, the web terminal fails to start and the terminal panel is in a loading state until a context deadline exceeded error occurs. To avoid this issue, ensure that the network policies for namespaces that are used for terminals allow ingress from the openshift-console and openshift-operators namespaces. The following samples show NetworkPolicy objects for allowing ingress from the openshift-console and openshift-operators namespaces. Allowing ingress from the openshift-console namespace apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-console spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console podSelector: {} policyTypes: - Ingress Allowing ingress from the openshift-operators namespace apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-operators spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-operators podSelector: {} policyTypes: - Ingress 8.5. Uninstalling the web terminal Uninstalling the Web Terminal Operator does not remove any of the custom resource definitions (CRDs) or managed resources that are created when the Operator is installed. For security purposes, you must manually uninstall these components. By removing these components, you save cluster resources because terminals do not idle when the Operator is uninstalled. Uninstalling the web terminal is a two-step process: Uninstall the Web Terminal Operator and related custom resources (CRs) that were added when you installed the Operator. Uninstall the DevWorkspace Operator and its related custom resources that were added as a dependency of the Web Terminal Operator. 8.5.1. Removing the Web Terminal Operator You can uninstall the web terminal by removing the Web Terminal Operator and custom resources used by the Operator. Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. You have installed the oc CLI. Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Scroll the filter list or type a keyword into the Filter by name box to find the Web Terminal Operator. Click the Options menu for the Web Terminal Operator, and then select Uninstall Operator . In the Uninstall Operator confirmation dialog box, click Uninstall to remove the Operator, Operator deployments, and pods from the cluster. 
The Operator stops running and no longer receives updates. 8.5.2. Removing the DevWorkspace Operator To completely uninstall the web terminal, you must also remove the DevWorkspace Operator and custom resources used by the Operator. Important The DevWorkspace Operator is a standalone Operator and may be required as a dependency for other Operators installed in the cluster. Follow the steps below only if you are sure that the DevWorkspace Operator is no longer needed. Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. You have installed the oc CLI. Procedure Remove the DevWorkspace custom resources used by the Operator, along with any related Kubernetes objects: USD oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait USD oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait Warning If this step is not complete, finalizers make it difficult to fully uninstall the Operator. Remove the CRDs used by the Operator: Warning The DevWorkspace Operator provides custom resource definitions (CRDs) that use conversion webhooks. Failing to remove these CRDs can cause issues in the cluster. USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io Verify that all involved custom resource definitions are removed. The following command should not display any output: USD oc get customresourcedefinitions.apiextensions.k8s.io | grep "devfile.io" Remove the devworkspace-webhook-server deployment, mutating, and validating webhooks: USD oc delete deployment/devworkspace-webhook-server -n openshift-operators USD oc delete mutatingwebhookconfigurations controller.devfile.io USD oc delete validatingwebhookconfigurations controller.devfile.io Note If you remove the devworkspace-webhook-server deployment without removing the mutating and validating webhooks, you can not use oc exec commands to run commands in a container in the cluster. After you remove the webhooks you can use the oc exec commands again. Remove any remaining services, secrets, and config maps. Depending on the installation, some resources included in the following commands may not exist in the cluster. USD oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators USD oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators USD oc delete clusterrole devworkspace-webhook-server USD oc delete clusterrolebinding devworkspace-webhook-server Uninstall the DevWorkspace Operator: In the Administrator perspective of the web console, navigate to Operators Installed Operators . Scroll the filter list or type a keyword into the Filter by name box to find the DevWorkspace Operator. Click the Options menu for the Operator, and then select Uninstall Operator . In the Uninstall Operator confirmation dialog box, click Uninstall to remove the Operator, Operator deployments, and pods from the cluster. The Operator stops running and no longer receives updates. | [
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-console spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-operators spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-operators podSelector: {} policyTypes: - Ingress",
"oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait",
"oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io",
"oc get customresourcedefinitions.apiextensions.k8s.io | grep \"devfile.io\"",
"oc delete deployment/devworkspace-webhook-server -n openshift-operators",
"oc delete mutatingwebhookconfigurations controller.devfile.io",
"oc delete validatingwebhookconfigurations controller.devfile.io",
"oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators",
"oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators",
"oc delete clusterrole devworkspace-webhook-server",
"oc delete clusterrolebinding devworkspace-webhook-server"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/web_console/web-terminal |
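If the devworkspaces delete commands in the procedure above appear to hang, it is usually because a finalizer on a remaining custom resource cannot complete. The following is a minimal troubleshooting sketch rather than part of the official procedure; the name and namespace are placeholders, and forcibly clearing finalizers should only be a last resort:
$ oc get devworkspaces.workspace.devfile.io --all-namespaces
$ oc patch devworkspaces.workspace.devfile.io <name> -n <namespace> --type merge -p '{"metadata":{"finalizers":[]}}'
The first command lists anything still pending deletion; the second removes the finalizers from one stuck resource so the delete can finish.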
8.166. python-weberror | 8.166.1. RHBA-2013:1723 - python-weberror bug fix update An updated python-weberror package that fixes one bug is now available for Red Hat Enterprise Linux 6. The python-weberror package provides WebError, a web application's error handling library for use as a Web Server Gateway Interface (WSGI) middleware. Bug Fix BZ# 746118 Previously, the WebError middleware used the MD5 algorithm when assigning an identifier to the handled error. However, this algorithm is not, by default, supported by Python's runtime in FIPS mode. Consequently, when web applications raised an exception in FIPS mode and the exception was handled by WebError, incomplete error diagnostics were provided. With this update, error identification based on MD5 is not generated automatically, thus avoiding the problems when the error identifier is not processed further. Users of python-weberror are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/python-weberror
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/making-open-source-more-inclusive |
10.3.6. Establishing a Bond Connection | 10.3.6. Establishing a Bond Connection You can use NetworkManager to create a Bond from two or more Wired or Infiniband connections. It is not necessary to create the connections to be bonded first. They can be configured as part of the process to configure the bond. You must have the MAC addresses of the interfaces available in order to complete the configuration process. Note NetworkManager support for bonding must be enabled by means of the NM_BOND_VLAN_ENABLED directive and then NetworkManager must be restarted. See Section 11.2.1, "Ethernet Interfaces" for an explanation of NM_CONTROLLED and the NM_BOND_VLAN_ENABLED directive. See Section 12.3.4, "Restarting a Service" for an explanation of restarting a service such as NetworkManager from the command line. Alternatively, for a graphical tool see Section 12.2.1, "Using the Service Configuration Utility" . Procedure 10.9. Adding a New Bond Connection You can configure a Bond connection by opening the Network Connections window, clicking Add , and selecting Bond from the list. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the Add button to open the selection list. Select Bond and then click Create . The Editing Bond connection 1 window appears. On the Bond tab, click Add and select the type of interface you want to use with the bond connection. Click the Create button. Note that the dialog to select the slave type only comes up when you create the first slave; after that, it will automatically use that same type for all further slaves. The Editing bond0 slave 1 window appears. Fill in the MAC address of the first interface to be bonded. The first slave's MAC address will be used as the MAC address for the bond interface. If required, enter a clone MAC address to be used as the bond's MAC address. Click the Apply button. The Authenticate window appears. Enter the root password to continue. Click the Authenticate button. The name of the bonded slave appears in the Bonded Connections window . Click the Add button to add further slave connections. Review and confirm the settings and then click the Apply button. Edit the bond-specific settings by referring to the section called "Configuring the Bond Tab" below. Figure 10.14. Editing the newly created Bond connection 1 Procedure 10.10. Editing an Existing Bond Connection Follow these steps to edit an existing bond connection. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Select the connection you want to edit and click the Edit button. Select the Bond tab. Configure the connection name, auto-connect behavior, and availability settings. Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the Bond section of the Network Connections window. Connect automatically - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. 
Edit the bond-specific settings by referring to the section called "Configuring the Bond Tab" below. Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your bond connection, click the Apply button to save your customized configuration. Given a correct configuration, you can connect to your new or customized connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 10.3.9.5, "Configuring IPv6 Settings" . Configuring the Bond Tab If you have already added a new bond connection (see Procedure 10.9, "Adding a New Bond Connection" for instructions), you can edit the Bond tab to set the load sharing mode and the type of link monitoring to use to detect failures of a slave connection. Mode The mode that is used to share traffic over the slave connections which make up the bond. The default is Round-robin . Other load sharing modes, such as 802.3ad , can be selected by means of the drop-down list. Link Monitoring The method of monitoring the slaves' ability to carry network traffic. The following modes of load sharing are selectable from the Mode drop-down list: Round-robin Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available. This mode might not work behind a bridge with virtual machines without additional switch configuration. Active backup Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails. Note that this is the only mode available for bonds of InfiniBand devices. XOR Sets an XOR (exclusive-or) policy. Transmissions are based on the selected hash policy. The default is to derive a hash by XOR of the source and destination MAC addresses multiplied by the modulo of the number of slave interfaces. In this mode, traffic destined for specific peers will always be sent over the same interface. As the destination is determined by the MAC addresses, this method works best for traffic to peers on the same link or local network. If traffic has to pass through a single router, then this mode of traffic balancing will be suboptimal. Broadcast Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces. This mode might not work behind a bridge with virtual machines without additional switch configuration. 802.3ad Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a network switch that is 802.3ad compliant. Adaptive transmit load balancing Sets an adaptive Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave.
If the receiving slave fails, another slave takes over the MAC address of the failed slave. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. Adaptive load balancing Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. The following types of link monitoring can be selected from the Link Monitoring drop-down list. It is a good idea to test which channel bonding module parameters work best for your bonded interfaces. MII (Media Independent Interface) The state of the carrier wave of the interface is monitored. This can be done by querying the driver, by querying MII registers directly, or by using ethtool to query the device. Three options are available: Monitoring Frequency The time interval, in milliseconds, between querying the driver or MII registers. Link up delay The time in milliseconds to wait before attempting to use a link that has been reported as up. This delay can be used if some gratuitous ARP requests are lost in the period immediately following the link being reported as "up". This can happen during switch initialization, for example. Link down delay The time in milliseconds to wait before changing to another link when a previously active link has been reported as "down". This delay can be used if an attached switch takes a relatively long time to change to backup mode. ARP The Address Resolution Protocol (ARP) is used to probe one or more peers to determine how well the link-layer connections are working. It is dependent on the device driver providing the transmit start time and the last receive time. Two options are available: Monitoring Frequency The time interval, in milliseconds, between sending ARP requests. ARP targets A comma-separated list of IP addresses to send ARP requests to. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-establishing_a_bond_connection
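After the bond connection is up, a quick way to confirm which slaves joined the bond and which mode and link-monitoring settings took effect is to read the kernel's bonding status file. This is a generic check rather than part of the procedure above, and it assumes the bond interface is named bond0:
$ cat /proc/net/bonding/bond0
$ ip addr show bond0
The first command lists the bonding mode, MII status, and each slave interface; the second confirms the bond's MAC address and IP configuration.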
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_properties_file/making-open-source-more-inclusive |
7.3. Network Bonding Using the NetworkManager Command Line Tool, nmcli | 7.3. Network Bonding Using the NetworkManager Command Line Tool, nmcli Note See Section 3.3, "Configuring IP Networking with nmcli" for an introduction to nmcli . To create a bond connection with the nmcli tool, issue the following command: Note that as no con-name was given for the bond, the connection name was derived from the interface name by prepending the type. NetworkManager supports most of the bonding options provided by the kernel. For example: To add a port interface: Create a new connection; see Section 3.3.5, "Creating and Modifying a Connection Profile with nmcli" for details. Set the controller property to the bond interface name, or to the name of the controller connection: To add a new port interface, repeat the command with the new interface. For example: To activate the ports, issue a command as follows: When you activate a port, the controller connection also starts. See Section 7.1, "Understanding the Default Behavior of Controller and Port Interfaces" for more information. In this case, it is not necessary to manually activate the controller connection. It is possible to change the active_slave option and the primary option of the bond at runtime, without deactivating the connection. For example, to change the active_slave option, issue the following command: or to change the primary option: Note The active_slave option sets the currently active port whereas the primary option of the bond specifies the active port to be automatically selected by the kernel when a new port is added or a failure of the active port occurs. | [
"~]$ nmcli con add type bond ifname mybond0 Connection 'bond-mybond0' (5f739690-47e8-444b-9620-1895316a28ba) successfully added.",
"~]$ nmcli con add type bond ifname mybond0 bond.options \"mode=balance-rr,miimon=100\" Connection 'bond-mybond0' (5f739690-47e8-444b-9620-1895316a28ba) successfully added.",
"~]$ nmcli con add type ethernet ifname ens3 master mybond0 Connection 'bond-slave-ens3' (220f99c6-ee0a-42a1-820e-454cbabc2618) successfully added.",
"~]$ nmcli con add type ethernet ifname ens7 master mybond0 Connection 'bond-slave-ens7' (ecc24c75-1c89-401f-90c8-9706531e0231) successfully added.",
"~]$ nmcli con up bond-slave-ens7 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/14)",
"~]$ nmcli con up bond-slave-ens3 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/15)",
"~]$ nmcli dev mod bond0 +bond.options \"active_slave=ens7\" Connection successfully reapplied to device 'bond0'.",
"~]$ nmcli dev mod bond0 +bond.options \"primary=ens3\" Connection successfully reapplied to device 'bond0'."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-network_bonding_using_the_networkmanager_command_line_tool_nmcli |
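To confirm that the runtime changes shown above actually took effect, the kernel's bonding attributes can also be read directly from sysfs. This is an optional check; substitute the bond device name used on your system (the add examples above use mybond0, while the dev mod examples use bond0):
~]$ cat /sys/class/net/<bond_device>/bonding/mode
~]$ cat /sys/class/net/<bond_device>/bonding/active_slave
After the active_slave modification above, the second file should report ens7.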
Installing on a single node | Installing on a single node OpenShift Container Platform 4.13 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team | [
"example.com",
"<cluster_name>.example.com",
"export OCP_VERSION=<ocp_version> 1",
"export ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"export ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep $ARCH | grep iso | cut -d\\\" -f4)",
"curl -L $ISO_URL -o rhcos-live.iso",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'",
"coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso",
"./openshift-install --dir=ocp wait-for install-complete",
"export KUBECONFIG=ocp/auth/kubeconfig",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.26.0",
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"variant: openshift version: 4.13.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'",
"butane -pr embedded.yaml -o embedded.ign",
"coreos-installer iso ignition embed -i embedded.ign rhcos-4.13.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.13.0-x86_64-live.x86_64.iso",
"coreos-installer iso ignition show rhcos-sshd-4.13.0-x86_64-live.x86_64.iso",
"{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/installing_on_a_single_node/index |
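Once the node reports Ready as shown above, a short follow-up check can confirm that the cluster has settled. These commands are a generic verification sketch and assume KUBECONFIG is still exported as in the procedure:
$ oc get clusteroperators
$ oc get pods --all-namespaces | grep -Ev 'Running|Completed'
All cluster Operators should report Available=True, and the second command should return no pods once the installation has fully converged.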
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/making-open-source-more-inclusive |
18.4. Saving iptables Rules | Rules created with the iptables command are stored in memory. If the system is restarted before saving the iptables rule set, all rules are lost. For netfilter rules to persist through system reboot, they need to be saved. To do this, log in as root and type: This executes the iptables initscript, which runs the /sbin/iptables-save program and writes the current iptables configuration to /etc/sysconfig/iptables . The existing /etc/sysconfig/iptables file is saved as /etc/sysconfig/iptables.save . The next time the system boots, the iptables init script reapplies the rules saved in /etc/sysconfig/iptables by using the /sbin/iptables-restore command. While it is always a good idea to test a new iptables rule before committing it to the /etc/sysconfig/iptables file, it is possible to copy iptables rules into this file from another system's version of this file. This provides a quick way to distribute sets of iptables rules to multiple machines. Important If distributing the /etc/sysconfig/iptables file to other machines, type /sbin/service iptables restart for the new rules to take effect. | [
"/sbin/service iptables save"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-iptables-saving |
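As a concrete illustration of the test-then-save workflow described above, the following sketch adds a rule, verifies it, saves it, and copies the resulting file to another machine. The port and hostname are examples only:
/sbin/iptables -A INPUT -p tcp --dport 22 -j ACCEPT
/sbin/iptables -L -n --line-numbers
/sbin/service iptables save
scp /etc/sysconfig/iptables otherhost.example.com:/etc/sysconfig/iptables
After copying the file, run /sbin/service iptables restart on the receiving machine so the distributed rules take effect.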
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.6/rn-openjdk-support-policy
Chapter 40. JMS - IBM MQ Kamelet Source | Chapter 40. JMS - IBM MQ Kamelet Source A Kamelet that can read events from an IBM MQ message queue using JMS. 40.1. Configuration Options The following table summarizes the configuration options available for the jms-ibm-mq-source Kamelet: Property Name Description Type Default Example channel * IBM MQ Channel Name of the IBM MQ Channel string destinationName * Destination Name The destination name string password * Password Password to authenticate to IBM MQ server string queueManager * IBM MQ Queue Manager Name of the IBM MQ Queue Manager string serverName * IBM MQ Server name IBM MQ Server name or address string serverPort * IBM MQ Server Port IBM MQ Server port integer 1414 username * Username Username to authenticate to IBM MQ server string clientId IBM MQ Client ID Name of the IBM MQ Client ID string destinationType Destination Type The JMS destination type (queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 40.2. Dependencies At runtime, the jms-ibm-mq-source Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:com.ibm.mq:com.ibm.mq.allclient:9.2.5.0 40.3. Usage This section describes how you can use the jms-ibm-mq-source . 40.3.1. Knative Source You can use the jms-ibm-mq-source Kamelet as a Knative source by binding it to a Knative object. jms-ibm-mq-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 40.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 40.3.1.2. Procedure for using the cluster CLI Save the jms-ibm-mq-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jms-ibm-mq-source-binding.yaml 40.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 40.3.2. Kafka Source You can use the jms-ibm-mq-source Kamelet as a Kafka source by binding it to a Kafka topic. jms-ibm-mq-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: "10.103.41.245" serverPort: "1414" destinationType: "queue" destinationName: "DEV.QUEUE.1" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 40.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. 
Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 40.3.2.2. Procedure for using the cluster CLI Save the jms-ibm-mq-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jms-ibm-mq-source-binding.yaml 40.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 40.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-ibm-mq-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f jms-ibm-mq-source-binding.yaml",
"kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-ibm-mq-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-ibm-mq-source properties: serverName: \"10.103.41.245\" serverPort: \"1414\" destinationType: \"queue\" destinationName: \"DEV.QUEUE.1\" queueManager: QM1 channel: DEV.APP.SVRCONN username: app password: passw0rd sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f jms-ibm-mq-source-binding.yaml",
"kamel bind --name jms-ibm-mq-source-binding 'jms-ibm-mq-source?serverName=10.103.41.245&serverPort=1414&destinationType=queue&destinationName=DEV.QUEUE.1&queueManager=QM1&channel=DEV.APP.SVRCONN&username=app&password=passw0rd' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jms-ibm-mq-source |
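After creating the KameletBinding with either procedure above, it can be useful to confirm that Camel K materialized and started the integration. This check is not part of the documented procedure; the names below simply reuse the binding name from the examples, so adjust them if you changed it:
$ oc get kameletbinding jms-ibm-mq-source-binding
$ kamel logs jms-ibm-mq-source-binding
The first command shows the binding's phase, and kamel logs streams the running integration's output so you can watch messages arriving from the DEV.QUEUE.1 queue.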
Chapter 3. Binding [v1] | Chapter 3. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object Required target 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata target object ObjectReference contains enough information to let you inspect or modify the referred object. 3.1.1. .target Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/bindings POST : create a Binding /api/v1/namespaces/{namespace}/pods/{name}/binding POST : create binding of a Pod 3.2.1. /api/v1/namespaces/{namespace}/bindings Table 3.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a Binding Table 3.3. Body parameters Parameter Type Description body Binding schema Table 3.4. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty 3.2.2. /api/v1/namespaces/{namespace}/pods/{name}/binding Table 3.5. Global path parameters Parameter Type Description name string name of the Binding namespace string object name and auth scope, such as for teams and projects Table 3.6. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create binding of a Pod Table 3.7. Body parameters Parameter Type Description body Binding schema Table 3.8.
HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/metadata_apis/binding-v1
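As a sketch of how the pod binding endpoint described above is typically called, the following request binds a hypothetical pod named mypod in the default namespace to a node named worker-0; the API server URL and all names are placeholders, and the pod must still be unscheduled for the call to succeed:
TOKEN=$(oc whoami -t)
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://api.cluster.example.com:6443/api/v1/namespaces/default/pods/mypod/binding -d '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"mypod"},"target":{"apiVersion":"v1","kind":"Node","name":"worker-0"}}'
A 201 Created response indicates the pod was assigned to worker-0; this is essentially what a custom scheduler does when it places a pod.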
5.333. tog-pegasus | 5.333. tog-pegasus 5.333.1. RHBA-2012:0953 - tog-pegasus bug fix update Updated tog-pegasus packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The tog-pegasus packages provide OpenPegasus Web-Based Enterprise Management (WBEM) services for Linux. WBEM enables management solutions that deliver increased control of enterprise resources. WBEM is a platform and resource independent Distributed Management Task Force (DMTF) standard that defines a common information model (CIM) and communication protocol for monitoring and controlling resources from diverse sources. Bug Fixes BZ# 796191 Previously, with the Single Chunk Memory Objects (SCMO) implementation, empty string values in embedded instances were converted to null values during the embedded CIMInstance to SCMOInstance conversion. This was due to the usage of the _setString() function that set the string size to 0 if the string was empty. This broke functionality of the existing providers. A backported upstream patch uses the _SetBinary() function instead which is already used while setting the string values on the normal SCMOInstance. BZ# 799040 Previously, the tog-pegasus packages did not provide a generic "cim-server", which could be required by packages that do not need a specific implementation of the CIM server as a dependency. With this update, the tog-pegasus packages provide a generic "cim-server" that can be required by such packages. All users of tog-pegasus are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/tog-pegasus |
Chapter 1. Preparing to install on Azure | Chapter 1. Preparing to install on Azure 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on Azure Before installing OpenShift Container Platform on Microsoft Azure, you must configure an Azure account. See Configuring an Azure account for details about account configuration, account limits, public DNS zone configuration, required roles, creating service principals, and supported Azure regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Alternatives to storing administrator-level secrets in the kube-system project for other options. 1.3. Choosing a method to install OpenShift Container Platform on Azure You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Azure : You can install OpenShift Container Platform on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Azure : You can install a customized cluster on Azure infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on Azure with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on Azure into an existing VNet : You can install OpenShift Container Platform on an existing Azure Virtual Network (VNet) on Azure. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on Azure : You can install a private cluster into an existing Azure Virtual Network (VNet) on Azure. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 
Installing a cluster on Azure into a government region : OpenShift Container Platform can be deployed into Microsoft Azure Government (MAG) regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure infrastructure that you provision, by using the following method: Installing a cluster on Azure using ARM templates : You can install OpenShift Container Platform on Azure by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure account | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/preparing-to-install-on-azure
9.2. Run Red Hat JBoss Data Virtualization in a Google Compute Instance | Procedure 9.2. Run Red Hat JBoss Data Virtualization in a Google Compute Instance Open the necessary ports: Google Developers Console -> Compute -> Compute Engine -> VM Instance -> [name of your instance] -> Network . Upload your public SSH key: Google Developers Console -> Compute -> Compute Engine -> VM Instance -> [name of your instance] -> SSH Keys . Bind the management ports (jboss.bind.address.management) to an external interface. (The default value for management ports is localhost .) | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/run_red_hat_jboss_data_virtualization_in_a_google_compute_instance
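One way to carry out the last step above is to pass the bind address on the server start line rather than editing the configuration files. This is an illustrative sketch only; EAP_HOME is a placeholder for the installation directory, and binding the management interface to 0.0.0.0 should only be done when the firewall rules from the first step restrict access:
$ EAP_HOME/bin/standalone.sh -Djboss.bind.address.management=0.0.0.0
Alternatively, supply a specific external IP address of the instance instead of 0.0.0.0.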
Chapter 1. About Serverless | Chapter 1. About Serverless 1.1. OpenShift Serverless overview OpenShift Serverless provides Kubernetes native building blocks that enable developers to create and deploy serverless, event-driven applications on OpenShift Container Platform. OpenShift Serverless is based on the open source Knative project , which provides portability and consistency for hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. Note Because OpenShift Serverless releases on a different cadence from OpenShift Container Platform, the OpenShift Serverless documentation is now available as a separate documentation set at Red Hat OpenShift Serverless . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/serverless/about-serverless |
4.3. Adding Hosts | 4.3. Adding Hosts Each diskless client must have its own snapshot directory on the NFS server that is used as its read/write file system. The Network Booting Tool can be used to create these snapshot directories. After completing the steps in Section 4.2, "Finish Configuring the Diskless Environment" , a window appears to allow hosts to be added for the diskless environment. Click the New button. In the dialog shown in Figure 4.1, "Add Diskless Host" , provide the following information: Hostname or IP Address/Subnet - Specify the hostname or IP address of a system to add it as a host for the diskless environment. Enter a subnet to specify a group of systems. Operating System - Select the diskless environment for the host or subnet of hosts. Serial Console - Select this checkbox to perform a serial installation. Snapshot name - Provide a subdirectory name to be used to store all of the read/write content for the host. Ethernet - Select the Ethernet device on the host to use to mount the diskless environment. If the host only has one Ethernet card, select eth0 . Ignore the Kickstart File option. It is only used for PXE installations. Figure 4.1. Add Diskless Host In the existing snapshot/ directory in the diskless directory, a subdirectory is created with the Snapshot name specified as the file name. Then, all of the files listed in snapshot/files and snapshot/files.custom are copied from the root/ directory to this new directory. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/diskless_environments-adding_hosts
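The last paragraph above can be reproduced by hand if a host needs to be added without the graphical tool. The following is a rough sketch only; the diskless directory path and the snapshot name client1 are assumptions, and the real tool may do additional bookkeeping:
cd /diskless/i386/RHEL4-AS        # hypothetical diskless directory
mkdir -p snapshot/client1
cat snapshot/files snapshot/files.custom 2>/dev/null | while read f; do
    (cd root && cp -a --parents "$f" ../snapshot/client1/)
done
Each listed file keeps its relative path under the new snapshot directory, matching the layout the Network Booting Tool creates.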
8.4. Migrating NIS Domains to IdM | 8.4. Migrating NIS Domains to IdM If you are managing a Linux environment and want to migrate disparate NIS domains with different UIDs and GIDs into a modern identity management solution, you can use ID views to set host-specific UIDs and GIDs for existing hosts to prevent changing the permissions on existing files and directories. The process for the migration follows these steps: Create the users and groups in the IdM domain. For details, see Adding Stage or Active Users and Adding and Removing User Groups . Use ID views for existing hosts to override the IDs IdM generated during the user creation: Create an individual ID view. Add ID overrides for the users and groups to the ID view. Assign the ID view to the specific hosts. For details, see Defining a Different Attribute Value for a User Account on Different Hosts . See also Installing and Uninstalling Identity Management Clients in the Linux Domain Identity, Authentication, and Policy Guide . Decommission the NIS domains. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/id-views-nis
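A minimal command-line sketch of the ID view steps listed above, using the ipa utility; the view name, user, UID/GID values, and host are examples only:
$ ipa idview-add nis_view --desc "Preserve legacy NIS IDs"
$ ipa idoverrideuser-add nis_view jsmith --uid=10042 --gidnumber=10042
$ ipa idview-apply nis_view --hosts=host1.example.com
After the host's SSSD cache is cleared and the service restarted, jsmith resolves with the overridden UID and GID on host1.example.com only.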
Chapter 1. HawtIO release notes | Chapter 1. HawtIO release notes This chapter provides release information about the HawtIO Diagnostic Console Guide. 1.1. HawtIO features HawtIO Diagnostic Console is available as a Technology Preview component in the HawtIO Diagnostic Console Guide 4.0.0. Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope . The HawtIO Technology Preview includes the following main features: Runtime management of JVM via JMX, especially that of Camel applications and AMQ broker, with specialised views Visualisation and debugging/tracing of Camel routes Simple management and monitoring of application metrics 1.1.1. Platform and core component versions The versions of Red Hat build of HawtIO 4.0.0 TP1 should be: Red Hat build of Apache Camel Version BOM 4.0.2 for Spring Boot 3.1.6 camel-spring-boot-bom/4.0.0.redhat-00039 4.0.0 for Quarkus 3.2.0 quarkus-bom/3.2.9.Final-redhat-00003 HawtIO Console 4.0.0 HawtIO for OpenShift 2.0.0 HawtIO for OpenShift Operator 1.0.0 Jolokia 2.0.0 1.1.2. Technology Preview features UI plugins Connect JMX Camel Runtime Logs Quartz Spring Boot UI extension with custom plugins Authentication RBAC BASIC Authentication Spring Security Keycloak HawtIO Operator Managing HawtIO Online instances via HawtIO Custom Resources (CR) Addition of CR through the OpenShift Console; Addition of CR using CLI tools, eg. oc ; Deletion of CR through OpenShift Console or CLI results in removal of all owned HawtIO resources, inc. ConfigMaps, Deployments, ReplicationController etc.; Removal of operator-managed pod or other resource results in replacement being generated; Addition of property or modification of existing property, eg. CPU, Memory or custom configmap, results in new pod being deployed comprising the updated values Installation via Operator Hub Upgrade of operator is currently out of scope due to new product but will be required in subsequent releases; Successful installs via either the numbered (2.x) or the latest channels will result in the same operator version and operand being installed; Successful install of the operator through the catalog; Searching for HawtIO in the catalog will return both the product and community versions of the operator. Correct identification of the versions should be obvious. HawtIO Online With no credentials supplied, the application should redirect to the OpenShift authentication page The entering of correct OpenShift-supplied credentials should redirect back to the Discovery page of the application; The entering of incorrect OpenShift-supplied credentials should result in the user being instructed that logging-in cannot be completed; Discovery Only jolokia-enabled pods should be visible either in the same namespace (Namespace mode) or across the cluster (Cluster mode); Pods should display the correct status (up or down) through their status icons; Only those pods that have a working status should be capable of connection (connect button visible); The OpenShift console URL should have been populated by the startup scripts of HawtIO.
Therefore, all labels relating to a feature accessible in the OpenShift console should have hyperlinks that open the respective console target; The OpenShift console should be accessible from a link in the dropdown menu in the head bar of the application; All jolokia-enabled apps should have links listed in the dropdown menu in the head bar of the application; Connection to HawtIO-enabled applications Clicking the Connect button for a pod in the Discovery page should open a new window/tab and 'connect' to the destination app. This should manifest as the HawtIO Online UI showing plugin names vertically down the left sidebar, eg. JMX, and details of the respective focused plugin displayed in the remainder of the page; Failure to connect to a pod should present the user with some kind of error message; Once connected, all features listed in the 'UI Plugins' (above) should be available for testing where applicable to the target application. 1.1.3. HawtIO known issues The following issue remains with HawtIO for this release: HAWNG-147 Fuse web console - support both RH-SSO and Properties login When Keycloak/RH-SSO is configured for web console authentication, the user is automatically redirected to the Keycloak login page. There is no option for the user to attempt local/properties authentication, even if that JAAS module is also configured. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/release_notes_for_hawtio_diagnostic_console_guide/camel-hawtio-release-notes_hawtio
A.2. VDSM Hooks | A.2. VDSM Hooks VDSM is extensible via hooks. Hooks are scripts executed on the host when key events occur. When a supported event occurs, VDSM runs any executable hook scripts in /usr/libexec/vdsm/hooks/nn_event-name/ on the host in alphanumeric order. By convention, each hook script is assigned a two digit number, included at the front of the file name, to ensure that the order in which the scripts will be run is clear. You are able to create hook scripts in any programming language; however, Python will be used for the examples contained in this chapter. Note that all scripts defined on the host for the event are executed. If you require that a given hook is only executed for a subset of the virtual machines which run on the host, then you must ensure that the hook script itself handles this requirement by evaluating the Custom Properties associated with the virtual machine. Warning VDSM hooks can interfere with the operation of Red Hat Virtualization. A bug in a VDSM hook has the potential to cause virtual machine crashes and loss of data. VDSM hooks should be implemented with caution and tested rigorously. The Hooks API is new and subject to significant change in the future. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/vdsm_hooks
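As a small illustration of the convention described above, a hook script might be saved as /usr/libexec/vdsm/hooks/before_vm_start/50_log_custom_property (a hypothetical name following the two-digit ordering convention). The custom property name below is an example, and a real hook would typically do more than log:
#!/bin/bash
# Custom properties defined for the virtual machine are exposed to the hook
# as environment variables, so the script can decide whether to act.
if [ -n "$my_custom_property" ]; then
    echo "$(date): my_custom_property=$my_custom_property" >> /var/log/vdsm/50_log_custom_property.log
fi
exit 0
The script must be executable (chmod +x) and should exit with 0 so that VDSM continues starting the virtual machine.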
Chapter 150. XML Tokenize | Chapter 150. XML Tokenize The XML Tokenize language is a built-in language in camel-xml-jaxp , which is a truly XML-aware tokenizer that can be used with the Split EIP as the conventional Tokenize language to efficiently and effectively tokenize XML documents. XML Tokenize is capable of not only recognizing XML namespaces and hierarchical structures of the document but also more efficiently tokenizing XML documents than the conventional Tokenize language. 150.1. Dependencies When using xtokenize with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xml-jaxp-starter</artifactId> </dependency> Additional dependency In order to use this component, an additional dependency is required as follows: <dependency> <groupId>org.codehaus.woodstox</groupId> <artifactId>woodstox-core-asl</artifactId> <version>4.4.1</version> </dependency> or <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stax-starter</artifactId> </dependency> 150.2. XML Tokenizer Options The XML Tokenize language supports 4 options, which are listed below. Name Default Java Type Description headerName String Name of header to tokenize instead of using the message body. mode Enum The extraction mode. The available extraction modes are: i - injecting the contextual namespace bindings into the extracted token (default) w - wrapping the extracted token in its ancestor context u - unwrapping the extracted token to its child content t - extracting the text content of the specified element. Enum values: i w u t group Integer To group N parts together. trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 150.3. Example See Split EIP which has examples using the XML Tokenize language. 150.4. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.language.xtokenize.enabled Whether to enable auto configuration of the xtokenize language. This is enabled by default. Boolean camel.language.xtokenize.mode The extraction mode. The available extraction modes are: i - injecting the contextual namespace bindings into the extracted token (default) w - wrapping the extracted token in its ancestor context u - unwrapping the extracted token to its child content t - extracting the text content of the specified element. String camel.language.xtokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xml-jaxp-starter</artifactId> </dependency>",
"<dependency> <groupId>org.codehaus.woodstox</groupId> <artifactId>woodstox-core-asl</artifactId> <version>4.4.1</version> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stax-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-xml-tokenize-language-starter |
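The Spring Boot auto-configuration options listed above can be set like any other application property. A small illustrative snippet, assuming a standard Spring Boot project layout:
cat >> src/main/resources/application.properties <<'EOF'
# wrap each extracted token in its ancestor context instead of the default mode
camel.language.xtokenize.mode=w
camel.language.xtokenize.trim=true
EOF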
Chapter 9. Reference | Chapter 9. Reference 9.1. MicroProfile Config reference 9.1.1. Default MicroProfile Config attributes The MicroProfile Config specification defines three ConfigSources by default. ConfigSources are sorted according to their ordinal number. If a configuration must be overwritten for a later deployment, the lower ordinal ConfigSource is overwritten before a higher ordinal ConfigSource . Table 9.1. Default MicroProfile Config attributes ConfigSource Ordinal System properties 400 Environment variables 300 Property files META-INF/microprofile-config.properties found on the classpath 100 9.1.2. MicroProfile Config SmallRye ConfigSources The microprofile-config-smallrye project defines more ConfigSources you can use in addition to the default MicroProfile Config ConfigSources . Table 9.2. Additional MicroProfile Config attributes ConfigSource Ordinal config-source in the Subsystem 100 ConfigSource from the Directory 100 ConfigSource from Class 100 An explicit ordinal is not specified for these ConfigSources . They inherit the default ordinal value found in the MicroProfile Config specification. 9.2. MicroProfile Fault Tolerance reference 9.2.1. MicroProfile Fault Tolerance configuration properties The SmallRye Fault Tolerance specification defines the following properties in addition to the properties defined in the MicroProfile Fault Tolerance specification. Table 9.3. MicroProfile Fault Tolerance configuration properties Property Default value Description io.smallrye.faulttolerance.globalThreadPoolSize 100 Number of threads used by the fault tolerance mechanisms. This does not include bulkhead thread pools. io.smallrye.faulttolerance.timeoutExecutorThreads 5 Size of the thread pool used for scheduling timeouts. 9.3. MicroProfile JWT reference 9.3.1. MicroProfile Config JWT standard properties The microprofile-jwt-smallrye subsystem supports the following MicroProfile Config standard properties. Table 9.4. MicroProfile Config JWT standard properties Property Default Description mp.jwt.verify.publickey NONE String representation of the public key encoded using one of the supported formats. Do not set if you have set mp.jwt.verify.publickey.location . mp.jwt.verify.publickey.location NONE The location of the public key, which may be a relative path or URL. Do not set if you have set mp.jwt.verify.publickey . mp.jwt.verify.issuer NONE The expected value of any iss claim of any JWT token being validated. Example microprofile-config.properties configuration: 9.4. MicroProfile OpenAPI reference 9.4.1. MicroProfile OpenAPI configuration properties In addition to the standard MicroProfile OpenAPI configuration properties, JBoss EAP supports the following additional MicroProfile OpenAPI properties. These properties can be applied in both the global and the application scope. Table 9.5. MicroProfile OpenAPI properties in JBoss EAP Property Default value Description mp.openapi.extensions.enabled true Enables or disables registration of an OpenAPI endpoint. When set to false , disables generation of OpenAPI documentation. You can set the value globally using the config subsystem, or for each application in a configuration file such as /META-INF/microprofile-config.properties . You can parameterize this property to selectively enable or disable microprofile-openapi-smallrye in different environments, such as production or development. You can use this property to control which application associated with a given virtual host should generate a MicroProfile OpenAPI model.
mp.openapi.extensions.path /openapi You can use this property for generating OpenAPI documentation for multiple applications associated with a virtual host. Set a distinct mp.openapi.extensions.path on each application associated with the same virtual host. mp.openapi.extensions.servers.relative true Indicates whether auto-generated server records are absolute or relative to the location of the OpenAPI endpoint. Server records are necessary to ensure, in the presence of a non-root context path, that consumers of an OpenAPI document can construct valid URLs to REST services relative to the host of the OpenAPI endpoint. The value true indicates that the server records are relative to the location of the OpenAPI endpoint. The generated record contains the context path of the deployment. When set to false , JBoss EAP XP generates server records including all the protocols, hosts, and ports at which the deployment is accessible. | [
"mp.jwt.verify.publickey.location=META-INF/public.pem mp.jwt.verify.issuer=jwt-issuer"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/reference |
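The ordinal table above can be read as a simple override chain. The sketch below assumes a hypothetical property named greeting and a standard JBoss EAP standalone start; the file path and values are illustrative only, the point being that the higher-ordinal ConfigSource wins:

    # defined in the application archive (ordinal 100)
    echo "greeting=from-properties-file" >> src/main/resources/META-INF/microprofile-config.properties
    # an environment variable (ordinal 300) overrides the file...
    export GREETING=from-environment
    # ...and a system property (ordinal 400) overrides both
    ./bin/standalone.sh -Dgreeting=from-system-property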
Chapter 3. Understanding SSSD and its benefits | Chapter 3. Understanding SSSD and its benefits The System Security Services Daemon (SSSD) is a system service to access remote directories and authentication mechanisms. The following chapters outline how SSSD works, what are the benefits of using it, how the configuration files are processed, as well as what identity and authentication providers you can configure. 3.1. How SSSD works The System Security Services Daemon (SSSD) is a system service that allows you to access remote directories and authentication mechanisms. You can connect a local system, an SSSD client , to an external back-end system, a provider . For example: An LDAP directory An Identity Management (IdM) domain An Active Directory (AD) domain A Kerberos realm SSSD works in two stages: It connects the client to a remote provider to retrieve identity and authentication information. It uses the obtained authentication information to create a local cache of users and credentials on the client. Users on the local system are then able to authenticate using the user accounts stored in the remote provider. SSSD does not create user accounts on the local system. However, SSSD can be configured to create home directories for IdM users. Once created, an IdM user home directory and its contents on the client are not deleted when the user logs out. Figure 3.1. How SSSD works SSSD can also provide caches for several system services, such as Name Service Switch (NSS) or Pluggable Authentication Modules (PAM). Note Only use the SSSD service for caching user information. Running both Name Service Caching Daemon (NSCD) and SSSD for caching on the same system might lead to performance issues and conflicts. 3.2. Benefits of using SSSD Using the System Security Services Daemon (SSSD) provides multiple benefits regarding user identity retrieval and user authentication. Offline authentication SSSD optionally keeps a cache of user identities and credentials retrieved from remote providers. In this setup, a user - provided they have already authenticated once against the remote provider at the start of the session - can successfully authenticate to resources even if the remote provider or the client are offline. A single user account: improved consistency of the authentication process With SSSD, it is not necessary to maintain both a central account and a local user account for offline authentication. The conditions are: In a particular session, the user must have logged in at least once: the client must be connected to the remote provider when the user logs in for the first time. Caching must be enabled in SSSD. Without SSSD, remote users often have multiple user accounts. For example, to connect to a virtual private network (VPN), remote users have one account for the local system and another account for the VPN system. In this scenario, you must first authenticate on the private network to fetch the user from the remote server and cache the user credentials locally. With SSSD, thanks to caching and offline authentication, remote users can connect to network resources simply by authenticating to their local machine. SSSD then maintains their network credentials. Reduced load on identity and authentication providers When requesting information, the clients first check the local SSSD cache. SSSD contacts the remote providers only if the information is not available in the cache. 3.3. 
Multiple SSSD configuration files on a per-client basis The default configuration file for SSSD is /etc/sssd/sssd.conf . Apart from this file, SSSD can read its configuration from all *.conf files in the /etc/sssd/conf.d/ directory. This combination allows you to use the default /etc/sssd/sssd.conf file on all clients and add additional settings in further configuration files to extend the functionality individually on a per-client basis. How SSSD processes the configuration files SSSD reads the configuration files in this order: The primary /etc/sssd/sssd.conf file Other *.conf files in /etc/sssd/conf.d/ , in alphabetical order If the same parameter appears in multiple configuration files, SSSD uses the last read parameter. Note SSSD does not read hidden files (files starting with . ) in the conf.d directory. 3.4. Identity and authentication providers for SSSD You can connect an SSSD client to the external identity and authentication providers, for example an LDAP directory, an Identity Management (IdM), Active Directory (AD) domain, or a Kerberos realm. The SSSD client then get access to identity and authentication remote services using the SSSD provider. You can configure SSSD to use different identity and authentication providers or a combination of them. Identity and Authentication Providers as SSSD domains Identity and authentication providers are configured as domains in the SSSD configuration file, /etc/sssd/sssd.conf . The providers are listed in the [domain/ name of the domain ] or [domain/default] section of the file. A single domain can be configured as one of the following providers: An identity provider , which supplies user information such as UID and GID. Specify a domain as the identity provider by using the id_provider option in the [domain/ name of the domain ] section of the /etc/sssd/sssd.conf file. An authentication provider , which handles authentication requests. Specify a domain as the authentication provider by using the auth_provider option in the [domain/ name of the domain ] section of /etc/sssd/sssd.conf . An access control provider , which handles authorization requests. Specify a domain as the access control provider using the access_provider option in the [domain/ name of the domain ] section of /etc/sssd/sssd.conf . By default, the option is set to permit , which always allows all access. See the sssd.conf (5) man page for details. A combination of these providers, for example if all the corresponding operations are performed within a single server. In this case, the id_provider , auth_provider , and access_provider options are all listed in the same [domain/ name of the domain ] or [domain/default] section of /etc/sssd/sssd.conf . Note You can configure multiple domains for SSSD. You must configure at least one domain, otherwise SSSD will not start. Proxy Providers A proxy provider works as an intermediary relay between SSSD and resources that SSSD would otherwise not be able to use. When using a proxy provider, SSSD connects to the proxy service, and the proxy loads the specified libraries. 
You can configure SSSD to use a proxy provider to enable: Alternative authentication methods, such as a fingerprint scanner Legacy systems, such as NIS A local system account defined in the /etc/passwd file as an identity provider and a remote authentication provider, for example Kerberos Authentication of local users using smart cards Available Combinations of Identity and Authentication Providers You can configure SSSD to use the following combinations of identity and authentication providers. Table 3.1. Available Combinations of Identity and Authentication Providers Identity Provider Authentication Provider Identity Management [a] Identity Management Active Directory Active Directory LDAP LDAP LDAP Kerberos Proxy Proxy Proxy LDAP Proxy Kerberos [a] An extension of the LDAP provider type. Additional resources Configuring user authentication using authselect Querying domain information using SSSD [1] Reporting on user access on hosts using SSSD [1] To list and verify the status of the domains using the sssctl utility, your host should be enrolled in Identity Management (IdM) that is in a trust agreement with an Active Directory (AD) forest. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/understanding-sssd-and-its-benefits_configuring-authentication-and-authorization-in-rhel |
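As a hedged illustration of the conf.d mechanism described above, the snippet below drops an override file onto a single client; the domain name and provider values are placeholders. Because files in /etc/sssd/conf.d/ are read after /etc/sssd/sssd.conf and the last value read wins, the drop-in takes effect on that client only:

    # hypothetical per-client override; hidden files (starting with .) would not be read
    cat > /etc/sssd/conf.d/90-local-overrides.conf <<'EOF'
    [domain/example.com]
    id_provider = ldap
    auth_provider = krb5
    EOF
    chmod 600 /etc/sssd/conf.d/90-local-overrides.conf
    systemctl restart sssd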
Installing on AWS | Installing on AWS OpenShift Container Platform 4.13 Installing OpenShift Container Platform on Amazon Web Services Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_aws/index |
function::discard | function::discard Name function::discard - Discard all output related to a speculation buffer Synopsis Arguments id of the buffer to store the information in | [
"discard(id:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-discard |
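As an illustrative sketch only (not part of the reference entry above), discard is normally one branch of a speculation workflow: output is buffered during an operation and either committed or discarded once the outcome is known. The probe points, message text, and file name below are assumptions:

    # hypothetical SystemTap script pairing speculation/speculate/commit/discard
    cat > spec.stp <<'EOF'
    global ids
    probe syscall.open {
      ids[tid()] = speculation()
      speculate(ids[tid()], sprintf("open %s\n", filename))
    }
    probe syscall.open.return {
      if (retval < 0)
        commit(ids[tid()])    # keep the buffered output for failed opens
      else
        discard(ids[tid()])   # drop it for successful ones
      delete ids[tid()]
    }
    EOF
    stap spec.stp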
Chapter 11. Managing container images | Chapter 11. Managing container images With Satellite, you can import container images from various sources and distribute them to external containers using content views. For information about containers for Red Hat Enterprise Linux Atomic Host 7, see Getting Started with Containers in Red Hat Enterprise Linux Atomic Host 7 . For information about containers for Red Hat Enterprise Linux 8, see Building, running, and managing containers in Red Hat Enterprise Linux 8 . For information about containers for Red Hat Enterprise Linux 9, see Building, running, and managing containers in Red Hat Enterprise Linux 9 . 11.1. Importing container images You can import container image repositories from Red Hat Registry or from other image registries. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure with repository discovery In the Satellite web UI, navigate to Content > Products and click Repo Discovery . From the Repository Type list, select Container Images . In the Registry to Discover field, enter the URL of the registry to import images from. In the Registry Username field, enter the name that corresponds with your user name for the container image registry. In the Registry Password field, enter the password that corresponds with the user name that you enter. In the Registry Search Parameter field, enter any search criteria that you want to use to filter your search, and then click Discover . Optional: To further refine the Discovered Repository list, in the Filter field, enter any additional search criteria that you want to use. From the Discovered Repository list, select any repositories that you want to import, and then click Create Selected . Optional: To change the download policy for this container repository to on demand , see Section 4.11, "Changing the download policy for a repository" . Optional: If you want to create a product, from the Product list, select New Product . In the Name field, enter a product name. Optional: In the Repository Name and Repository Label columns, you can edit the repository names and labels. Click Run Repository Creation . When repository creation is complete, you can click each new repository to view more information. Optional: To filter the content you import to a repository, click a repository, and then navigate to Limit Sync Tags . Click to edit, and add any tags that you want to limit the content that synchronizes to Satellite. In the Satellite web UI, navigate to Content > Products and select the name of your product. Select the new repositories and then click Sync Now to start the synchronization process. Procedure with creating a repository manually In the Satellite web UI, navigate to Content > Products . Click the name of the required product. Click New repository . From the Type list, select docker . Enter the details for the repository, and click Save . Select the new repository, and click Sync Now . steps To view the progress of the synchronization, navigate to Content > Sync Status and expand the repository tree. When the synchronization completes, you can click Container Image Manifests to list the available manifests. From the list, you can also remove any manifests that you do not require. CLI procedure Create the custom Red Hat Container Catalog product: Create the repository for the container images: Synchronize the repository: Additional resources For more information about creating a product and repository manually, see Chapter 4, Importing content . 11.2. 
Managing container name patterns When you use Satellite to create and manage your containers, as the container moves through content view versions and different stages of the Satellite lifecycle environment, the container name changes at each stage. For example, if you synchronize a container image with the name ssh from an upstream repository, when you add it to a Satellite product and organization and then publish as part of a content view, the container image can have the following name: my_organization_production-custom_spin-my_product-custom_ssh . This can create problems when you want to pull a container image because container registries can contain only one instance of a container name. To avoid problems with Satellite naming conventions, you can set a registry name pattern to override the default name to ensure that your container name is clear for future use. Limitations If you use a registry name pattern to manage container naming conventions, because registry naming patterns must generate globally unique names, you might experience naming conflict problems. For example: If you set the repository.docker_upstream_name registry name pattern, you cannot publish or promote content views with container content with identical repository names to the Production lifecycle. If you set the lifecycle_environment.name registry name pattern, this can prevent the creation of a second container repository with the identical name. You must proceed with caution when defining registry naming patterns for your containers. Procedure To manage container naming with a registry name pattern, complete the following steps: In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Create a lifecycle environment or select an existing lifecycle environment to edit. In the Container Image Registry area, click the edit icon to the right of Registry Name Pattern area. Use the list of variables and examples to determine which registry name pattern you require. In the Registry Name Pattern field, enter the registry name pattern that you want to use. For example, to use the repository.docker_upstream_name : Click Save . 11.3. Managing container registry authentication You can manage the authentication settings for accessing containers images from Satellite. By default, users must authenticate to access containers images in Satellite. You can specify whether you want users to authenticate to access container images in Satellite in a lifecycle environment. For example, you might want to permit users to access container images from the Production lifecycle without any authentication requirement and restrict access the Development and QA environments to authenticated users. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Select the lifecycle environment that you want to manage authentication for. To permit unauthenticated access to the containers in this lifecycle environment, select the Unauthenticated Pull checkbox. To restrict unauthenticated access, clear the Unauthenticated Pull checkbox. Click Save . 11.4. Configuring Podman and Docker to trust the certificate authority Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/ and /etc/docker/certs.d/ . Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and naming the file ca.crt In the following examples, replace hostname.example.com with satellite.example.com or capsule.example.com , depending on your use case. 
You might first need to create the relevant location using: or For podman, use: Alternatively, if you are using Docker, copy the root CA file to the equivalent Docker directory: You no longer need to use the --tls-verify=false option when logging in to the registry: 11.5. Using container registries Podman and Docker can be used to fetch content from container registries. Container registries on Capsules On Capsules with content, the Container Gateway Capsule plugin acts as the container registry. It caches authentication information from Katello and proxies incoming requests to Pulp. The Container Gateway is available by default on Capsules with content. Procedure Logging in to the container registry: Listing container images: Pulling container images: | [
"hammer product create --description \" My_Description \" --name \"Red Hat Container Catalog\" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"",
"hammer repository create --content-type \"docker\" --docker-upstream-name \"rhel7\" --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\" --url \"http://registry.access.redhat.com/\"",
"hammer repository synchronize --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\"",
"<%= repository.docker_upstream_name %>",
"mkdir -p /etc/containers/certs.d/hostname.example.com",
"mkdir -p /etc/docker/certs.d/hostname.example.com",
"cp rootCA.pem /etc/containers/certs.d/hostname.example.com/ca.crt",
"cp rootCA.pem /etc/docker/certs.d/hostname.example.com/ca.crt",
"podman login hostname.example.com Username: admin Password: Login Succeeded!",
"podman login satellite.example.com",
"podman search satellite.example.com/",
"podman pull satellite.example.com/my-image:<optional_tag>"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/Managing_Container_Images_content-management |
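A hedged client-side sketch tying the sections above together; the host name, organization, and repository labels are placeholders that follow the container naming convention discussed under registry name patterns. It assumes the target lifecycle environment has Unauthenticated Pull enabled, so no podman login step is shown:

    # discover what the Satellite container registry exposes...
    podman search satellite.example.com/my_organization
    # ...and pull one repository by its published name and optional tag
    podman pull satellite.example.com/my_organization-production-my_product-rhel7:latest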
20.16.9.15. Setting VLAN tag (on supported network types only) | 20.16.9.15. Setting VLAN tag (on supported network types only) To specify the VLAN tag configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='bridge'> <vlan> <tag id='42'/> </vlan> <source bridge='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> <devices> ... Figure 20.52. Setting VLAN tag (on supported network types only) If (and only if) the network connection used by the guest virtual machine supports vlan tagging transparent to the guest virtual machine, an optional vlan element can specify one or more vlan tags to apply to the guest virtual machine's network traffic (openvswitch and type='hostdev' SR-IOV interfaces do support transparent vlan tagging of guest virtual machine traffic; everything else, including standard Linux bridges and libvirt's own virtual networks, do not support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches provide their own way (outside of libvirt) to tag guest virtual machine traffic onto specific vlans.) To allow for specification of multiple tags (in the case of vlan trunking), a subelement, tag , specifies which vlan tag to use (for example: tag id='42'/ . If an interface has more than one vlan element defined, it is assumed that the user wants to do VLAN trunking using all the specified tags. In the case that vlan trunking with a single tag is desired, the optional attribute trunk='yes' can be added to the toplevel vlan element. | [
"<devices> <interface type='bridge'> <vlan> <tag id='42'/> </vlan> <source bridge='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> <devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-setting-vlan-tag |
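A hedged fragment showing the single-tag trunking variant mentioned in the last sentence above; the guest name is hypothetical and only the placement of trunk='yes' on the vlan element is the point:

    # open the guest definition for editing
    virsh edit guest1
    # then adjust the interface's vlan element along these lines:
    #   <vlan trunk='yes'>
    #     <tag id='42'/>
    #   </vlan>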
Chapter 10. Live migration | Chapter 10. Live migration 10.1. Virtual machine live migration 10.1.1. About live migration Live migration is the process of moving a running virtual machine instance (VMI) to another node in the cluster without interrupting the virtual workload or access. If a VMI uses the LiveMigrate eviction strategy, it automatically migrates when the node that the VMI runs on is placed into maintenance mode. You can also manually start live migration by selecting a VMI to migrate. You can use live migration if the following conditions are met: Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 10.1.2. Additional resources Migrating a virtual machine instance to another node Live migration limiting Customizing the storage profile 10.2. Live migration limits and timeouts Apply live migration limits and timeouts so that migration processes do not overwhelm the cluster. Configure these settings by editing the HyperConverged custom resource (CR). 10.2.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters. USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 1 In this example, the spec.liveMigrationConfig array contains the default values for each field. Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 10.2.2. Cluster-wide live migration limits and timeouts Table 10.1. Migration parameters Parameter Description Default parallelMigrationsPerCluster Number of migrations running in parallel in the cluster. 5 parallelOutboundMigrationsPerNode Maximum number of outbound migrations per node. 2 bandwidthPerMigration Bandwidth limit of each migration, in MiB/s. 0 [1] completionTimeoutPerGiB The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a virtual machine instance with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 800 progressTimeout The migration is canceled if memory copy fails to make progress in this time, in seconds. 150 The default value of 0 is unlimited. 10.3. Migrating a virtual machine instance to another node Manually initiate a live migration of a virtual machine instance to another node using either the web console or the CLI. Note If a virtual machine uses a host model CPU, you can perform live migration of that virtual machine only between nodes that support its host CPU model. 10.3.1. 
Initiating live migration of a virtual machine instance in the web console Migrate a running virtual machine instance to a different node in the cluster. Note The Migrate action is visible to all users but only admin users can initiate a virtual machine migration. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. You can initiate the migration from this page, which makes it easier to perform actions on multiple virtual machines on the same page, or from the VirtualMachine details page where you can view comprehensive details of the selected virtual machine: Click the Options menu to the virtual machine and select Migrate . Click the virtual machine name to open the VirtualMachine details page and click Actions Migrate . Click Migrate to migrate the virtual machine to another node. 10.3.2. Initiating live migration of a virtual machine instance in the CLI Initiate a live migration of a running virtual machine instance by creating a VirtualMachineInstanceMigration object in the cluster and referencing the name of the virtual machine instance. Procedure Create a VirtualMachineInstanceMigration configuration file for the virtual machine instance to migrate. For example, vmi-migrate.yaml : apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora Create the object in the cluster by running the following command: USD oc create -f vmi-migrate.yaml The VirtualMachineInstanceMigration object triggers a live migration of the virtual machine instance. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. Additional resources: Monitoring live migration of a virtual machine instance Cancelling the live migration of a virtual machine instance 10.4. Migrating a virtual machine over a dedicated additional network You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 10.4.1. Configuring a dedicated secondary network for virtual machine live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition for a namespace by using the CLI. Then, add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. The Multus Container Network Interface (CNI) plugin is installed on the cluster. Every node on the cluster has at least two Network Interface Cards (NICs), and the NICs to be used for live migration are connected to the same VLAN. The virtual machine (VM) is running with the LiveMigrate eviction strategy. Procedure Create a NetworkAttachmentDefinition manifest. Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 2 "mode": "bridge", "ipam": { "type": "whereabouts", 3 "range": "10.200.5.0/24" 4 } }' 1 The name of the NetworkAttachmentDefinition object. 2 The name of the NIC to be used for live migration. 3 The name of the CNI plugin that provides the network for this network attachment definition. 4 The IP address range for the secondary network. 
This range must not have any overlap with the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR. For example: Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 ... 1 The name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 10.4.2. Additional resources Live migration limits and timeouts 10.5. Monitoring live migration of a virtual machine instance You can monitor the progress of a live migration of a virtual machine instance from either the web console or the CLI. 10.5.1. Monitoring live migration of a virtual machine instance in the web console For the duration of the migration, the virtual machine has a status of Migrating . This status is displayed on the VirtualMachines page or on the VirtualMachine details page of the migrating virtual machine. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. 10.5.2. Monitoring live migration of a virtual machine instance in the CLI The status of the virtual machine migration is stored in the Status component of the VirtualMachineInstance configuration. Procedure Use the oc describe command on the migrating virtual machine instance: USD oc describe vmi vmi-fedora Example output ... Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 10.6. Cancelling the live migration of a virtual machine instance Cancel the live migration so that the virtual machine instance remains on the original node. You can cancel a live migration from either the web console or the CLI. 10.6.1. Cancelling live migration of a virtual machine instance in the web console You can cancel the live migration of a virtual machine instance in the web console. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Cancel Migration . 10.6.2. 
Cancelling live migration of a virtual machine instance in the CLI Cancel the live migration of a virtual machine instance by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job 10.7. Configuring virtual machine eviction strategy The LiveMigrate eviction strategy ensures that a virtual machine instance is not interrupted if the node is placed into maintenance or drained. Virtual machines instances with this eviction strategy will be live migrated to another node. 10.7.1. Configuring custom virtual machines with the LiveMigration eviction strategy You only need to configure the LiveMigration eviction strategy on custom virtual machines. Common templates have this eviction strategy configured by default. Procedure Add the evictionStrategy: LiveMigrate option to the spec.template.spec section in the virtual machine configuration file. This example uses oc edit to update the relevant snippet of the VirtualMachine configuration file: USD oc edit vm <custom-vm> -n <my-namespace> apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate ... Restart the virtual machine for the update to take effect: USD virtctl restart <custom-vm> -n <my-namespace> | [
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora",
"oc create -f vmi-migrate.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'",
"edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150",
"get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'",
"oc describe vmi vmi-fedora",
"Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true",
"oc delete vmim migration-job",
"oc edit vm <custom-vm> -n <my-namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate",
"virtctl restart <custom-vm> -n <my-namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/live-migration |
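A small companion check, assuming a migration created as above is still in flight; the namespace is a placeholder and the resources shown are the same ones used elsewhere in this chapter:

    # list VirtualMachineInstanceMigration objects...
    oc get vmim -n my-namespace
    # ...and watch the VMI until its migration state and target node settle
    oc get vmi vmi-fedora -n my-namespace -w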
Appendix C. Revision History | Appendix C. Revision History Revision History Revision 1-502 Mon Mar 08 2017 Jiri Herrmann Updates for the 6.9 GA release Revision 1-501 Mon May 02 2016 Jiri Herrmann Updates for the 6.8 GA release Revision 1-500 Thu Mar 01 2016 Jiri Herrmann Multiple updates for the 6.8 beta release Revision 1-449 Thu Oct 08 2015 Jiri Herrmann Cleaned up the Revision History Revision 1-447 Fri Jul 10 2015 Dayle Parker Updates for the 6.7 GA release. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/appe-virtualization_administration_guide-revision_history |
Chapter 15. Red Hat Developer Toolset Images | Chapter 15. Red Hat Developer Toolset Images Red Hat Developer Toolset is a Red Hat offering for developers on the Red Hat Enterprise Linux platform. It provides a complete set of development and performance analysis tools that can be installed and used on multiple versions of Red Hat Enterprise Linux. Executables built with the Red Hat Developer Toolset toolchain can then also be deployed and run on multiple versions of Red Hat Enterprise Linux. For detailed compatibility information, see Red Hat Developer Toolset 12 User Guide . Important Only container images providing the latest version of Red Hat Developer Toolset are supported. 15.1. Running Red Hat Developer Toolset Tools from Pre-Built Container Images To display general usage information for pre-built Red Hat Developer Toolset container images that you have already pulled to your local machine, run the following command as root : To launch an interactive shell within a pre-built container image, run the following command as root : In both of the above commands, substitute the image_name parameter with the name of the container image you pulled to your local system and now want to use. For example, to launch an interactive shell within the container image with selected toolchain components, run the following command as root : Example 15.1. Using GCC in the Pre-Built Red Hat Developer Toolset Toolchain Image This example illustrates how to obtain and launch the pre-built container image with selected toolchain components of the Red Hat Developer Toolset and how to run the gcc compiler within that image. Make sure you have a container environment set up properly on your system by following instructions at Using podman to work with containers in the Managing Containers document. Pull the pre-built toolchain Red Hat Developer Toolset container image from the official Red Hat Container Registry: To launch the container image with an interactive shell, issue the following command: To launch the container as a regular (non-root) user, use the sudo command. To map a directory from the host system to the container file system, include the -v (or --volume ) option in the podman command: In the above command, the host's ~/Source/ directory is mounted as the /src/ directory within the container. Once you are in the container's interactive shell, you can run Red Hat Developer Toolset tools as expected. For example, to verify the version of the gcc compiler, run: Additional Resources For more information about components available in Red Hat Developer Toolset, see the following online resources: Red Hat Developer Toolset 12 User Guide Red Hat Developer Toolset 12.1 Release Notes Red Hat Developer Toolset 12.0 Release Notes 15.2. Red Hat Developer Toolset Toolchain Container Image 15.2.1. Description The Red Hat Developer Toolset Toolchain image provides the GNU Compiler Collection (GCC) and GNU Debugger (GDB). The rhscl/devtoolset-12-toolchain-rhel7 image contains content corresponding to the following packages: Component Version Package gcc 12.2.1 devtoolset-12-gcc g++ devtoolset-12-gcc-c++ gfortran devtoolset-12-gcc-gfortran gdb 11.2 devtoolset-12-gdb Additionally, the devtoolset-12-binutils package is included as a dependency. 15.2.2. Access To pull the rhscl/devtoolset-12-toolchain-rhel7 image, run the following command as root : 15.3. Red Hat Developer Toolset Performance Tools Container Image 15.3.1. 
Description The Red Hat Developer Toolset Performance Tools image provides a number of profiling and performance measurement tools. The rhscl/devtoolset-12-perftools-rhel7 image includes the following components: Component Version Package dwz 0.14 devtoolset-12-dwz Dyninst 12.1.0 devtoolset-12-dyninst elfutils 0.187 devtoolset-12-elfutils ltrace 0.7.91 devtoolset-12-ltrace make 4.3 devtoolset-12-make memstomp 0.1.5 devtoolset-12-memstomp OProfile 1.4.0 devtoolset-12-oprofile strace 5.18 devtoolset-12-strace SystemTap 4.7 devtoolset-12-systemtap Valgrind 3.19.0 devtoolset-12-valgrind Additionally, the devtoolset-12-gcc and devtoolset-12-binutils packages are included as a dependency. 15.3.2. Access To pull the rhscl/devtoolset-12-perftools-rhel7 image, run the following command as root : 15.3.3. Usage Using the SystemTap Tool from Container Images When using the SystemTap tool from a container image, additional configuration is required, and the container needs to be run with special command-line options. The following three conditions need to be met: The image needs to be run with super-user privileges. To do this, run the image using the following command: To use the pre-built perftools image, substitute the image name for devtoolset-12-perftools-rhel7 in the above command. The following kernel packages need to be installed in the container: kernel kernel-devel kernel-debuginfo The version and release numbers of the above packages must match the version and release numbers of the kernel running on the host system. Run the following command to determine the version and release numbers of the hosts system's kernel: Note that the kernel-debuginfo package is only available from the Debug repository. Enable the rhel-7-server-debug-rpms repository. For more information on how to get access to debuginfo packages, see How can I download or install debuginfo packages for RHEL systems? . To install the required packages with the correct version, use the yum package manager and the output of the uname command. For example, to install the correct version of the kernel package, run the following command as root : Save the container to a reusable image by executing the podman commit command. To save a custom-built SystemTap container: | [
"podman run image_name usage",
"podman run -ti image_name /bin/bash -l",
"podman run -ti rhscl/devtoolset-12-toolchain-rhel7 /bin/bash -l",
"podman pull rhscl/devtoolset-12-toolchain-rhel7",
"podman run -ti rhscl/devtoolset-12-toolchain-rhel7 /bin/bash -l",
"sudo podman run -v ~/Source:/src -ti rhscl/devtoolset-12-toolchain-rhel7 /bin/bash -l",
"bash-4.2USD gcc -v [...] gcc version 12.2.1 20221121 (Red Hat 12.2.1-4) (GCC)",
"podman pull registry.redhat.io/rhscl/devtoolset-12-toolchain-rhel7",
"podman pull registry.redhat.io/rhscl/devtoolset-12-perftools-rhel7",
"~]USD podman run --ti --privileged --ipc=host --net=host --pid=host devtoolset-12-my-perftools /bin/bash -l",
"~]USD uname -r 3.10.0-1160.90.1.el7.x86_64",
"~]# yum install -y kernel-USD(uname -r)",
"~]USD podman commit devtoolset-12-systemtap-USD(uname -r)"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/devtoolset-images |
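Building on the volume-mount example above, a hedged end-to-end sketch; the source file name is an assumption, and the compile step simply reuses the Red Hat Developer Toolset gcc that the toolchain image provides:

    sudo podman run -v ~/Source:/src -ti rhscl/devtoolset-12-toolchain-rhel7 /bin/bash -l
    # inside the container shell:
    gcc -o /src/hello /src/hello.c
    /src/hello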
Part I. Set Up a Cache Manager | Part I. Set Up a Cache Manager | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-set_up_a_cache_manager |
Chapter 11. Disabling Windows container workloads | Chapter 11. Disabling Windows container workloads You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO. 11.1. Uninstalling the Windows Machine Config Operator You can uninstall the Windows Machine Config Operator (WMCO) from your cluster. Prerequisites Delete the Windows Machine objects hosting your Windows workloads. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for Red Hat Windows Machine Config Operator . Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed. In the Windows Machine Config Operator descriptor page, click Uninstall . 11.2. Deleting the Windows Machine Config Operator namespace You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default. Prerequisites The WMCO is removed from your cluster. Procedure Remove all Windows workloads that were created in the openshift-windows-machine-config-operator namespace: USD oc delete --all pods --namespace=openshift-windows-machine-config-operator Verify that all pods in the openshift-windows-machine-config-operator namespace are deleted or are reporting a terminating state: USD oc get pods --namespace openshift-windows-machine-config-operator Delete the openshift-windows-machine-config-operator namespace: USD oc delete namespace openshift-windows-machine-config-operator Additional resources Deleting Operators from a cluster Removing Windows nodes | [
"oc delete --all pods --namespace=openshift-windows-machine-config-operator",
"oc get pods --namespace openshift-windows-machine-config-operator",
"oc delete namespace openshift-windows-machine-config-operator"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/windows_container_support_for_openshift/disabling-windows-container-workloads |
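A hedged final verification: once the namespace deletion finishes, the query below is expected to fail with a "not found" error rather than return a resource:

    oc get namespace openshift-windows-machine-config-operator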
Chapter 10. Dashboard (horizon) Parameters | Chapter 10. Dashboard (horizon) Parameters You can modify the horizon service with dashboard parameters. Parameter Description HorizonAllowedHosts A list of IP/Hostname for the server OpenStack Dashboard (horizon) is running on. Used for header checks. The default value is * . HorizonCustomizationModule OpenStack Dashboard (horizon) has a global overrides mechanism available to perform customizations. HorizonHelpURL On top of dashboard there is a Help button. This button could be used to re-direct user to vendor documentation or dedicated help portal. The default value is https://access.redhat.com/documentation/en-us/red_hat_openstack_platform . HorizonPasswordValidator Regex for password validation. HorizonPasswordValidatorHelp Help text for password validation. HorizonSecret Secret key for the webserver. HorizonSecureCookies Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in OpenStack Dashboard (horizon). The default value is False . HorizonVhostExtraParams Extra parameters for OpenStack Dashboard (horizon) vhost configuration. The default value is {'add_listen': True, 'priority': 10, 'access_log_format': '%a %l %u %t \\"%r\\" %>s %b \\"%%{}{Referer}i\\" \\"%%{}{User-Agent}i\\"', 'options': ['FollowSymLinks', 'MultiViews']} . MemcachedIPv6 Enable IPv6 features in Memcached. The default value is False . TimeZone The timezone to be set on the overcloud. The default value is UTC . WebSSOChoices Specifies the list of SSO authentication choices to present. Each item is a list of an SSO choice identifier and a display message. The default value is [['OIDC', 'OpenID Connect']] . WebSSOEnable Enable support for Web Single Sign-On. The default value is False . WebSSOIDPMapping Specifies a mapping from SSO authentication choice to identity provider and protocol. The identity provider and protocol names must match the resources defined in keystone. The default value is {'OIDC': ['myidp', 'openid']} . WebSSOInitialChoice The initial authentication choice to select by default. The default value is OIDC . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/ref_dashboard-horizon-parameters_overcloud_parameters |
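As a hedged sketch of how these parameters are typically applied, the environment file below overrides two of them for an overcloud deployment; the file name, the chosen values, and the exact deploy invocation are assumptions:

    cat > horizon-settings.yaml <<'EOF'
    parameter_defaults:
      HorizonSecureCookies: true
      TimeZone: 'Europe/Paris'
    EOF
    openstack overcloud deploy --templates -e horizon-settings.yaml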
Chapter 19. Configuring the IO Subsystem | Chapter 19. Configuring the IO Subsystem 19.1. IO Subsystem Overview The io subsystem defines the XNIO workers and buffer pools used by other subsystems, such as Undertow and Remoting. These workers and buffer pools are defined within the following components in the io subsystem: Default IO Subsystem Configuration <subsystem xmlns="urn:jboss:domain:io:3.0"> <worker name="default"/> <buffer-pool name="default"/> </subsystem> 19.2. Configuring a Worker Workers are XNIO worker instances. An XNIO worker instance is an abstraction layer for the Java NIO APIs, which provide functionality such as management of IO and worker threads as well as SSL support. By default, JBoss EAP provides single worker called default , but more can be defined. Updating an Existing Worker To update an existing worker: Creating a New Worker To create a new worker: Deleting a Worker To delete a worker: For a full list of the attributes available for configuring workers, please see the IO Subsystem Attributes section. 19.3. Configuring a Buffer Pool Note IO buffer pools are deprecated, but they are still set as the default in the current release. Buffer pools are pooled NIO buffer instances. Changing the buffer size has a big impact on application performance. For most servers, the ideal buffer size is usually 16k. For more information about configuring Undertow byte buffer pools, see the Configuring Byte Buffer Pools section of the Configuration Guide for JBoss EAP. Updating an Existing Buffer Pool To update an existing buffer pool: Creating a Buffer Pool To create a new buffer pool: Deleting a Buffer Pool To delete a buffer pool: For a full list of the attributes available for configuring buffer pools, please see the IO Subsystem Attributes section. 19.4. Tuning the IO Subsystem For tips on monitoring and optimizing performance for the io subsystem, see the IO Subsystem Tuning section of the Performance Tuning Guide . | [
"<subsystem xmlns=\"urn:jboss:domain:io:3.0\"> <worker name=\"default\"/> <buffer-pool name=\"default\"/> </subsystem>",
"/subsystem=io/worker=default:write-attribute(name=io-threads,value=10)",
"reload",
"/subsystem=io/worker=newWorker:add",
"/subsystem=io/worker=newWorker:remove",
"reload",
"/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=true)",
"reload",
"/subsystem=io/buffer-pool=newBuffer:add",
"/subsystem=io/buffer-pool=newBuffer:remove",
"reload"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_the_io_subsystem |
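A hedged companion to the commands above: before or after tuning, the current values (including runtime state) can be read back with the same management CLI syntax used throughout this chapter; no attributes beyond those already shown are assumed:

    /subsystem=io/worker=default:read-resource(include-runtime=true)
    /subsystem=io/buffer-pool=default:read-resource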
Chapter 90. LDAP | Chapter 90. LDAP Since Camel 1.5 Only producer is supported The LDAP component allows you to perform searches in LDAP servers using filters as the message payload. This component uses standard JNDI ( javax.naming package) to access the server. 90.1. Dependencies When using ldap with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ldap-starter</artifactId> </dependency> 90.2. URI format The ldapServerBean in the URI refers to a DirContext bean in the registry. The LDAP component only supports producer endpoints, which means that an ldap URI cannot appear in the from at the start of a route. 90.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 90.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 90.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 90.4. Component Options The LDAP component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean 90.5. Endpoint Options The LDAP endpoint is configured using URI syntax: with the following path and query parameters: 90.5.1. Path Parameters (1 parameters) Name Description Default Type dirContextName (producer) Required Name of either a javax.naming.directory.DirContext, or java.util.Hashtable, or Map bean to lookup in the registry. If the bean is either a Hashtable or Map then a new javax.naming.directory.DirContext instance is created for each use. If the bean is a javax.naming.directory.DirContext then the bean is used as given. The latter may not be possible in all situations where the javax.naming.directory.DirContext must not be shared, and in those situations it can be better to use java.util.Hashtable or Map instead. String 90.5.2. Query Parameters (5 parameters) Name Description Default Type base (producer) The base DN for searches. ou=system String pageSize (producer) When specified the ldap module uses paging to retrieve all results (most LDAP Servers throw an exception when trying to retrieve more than 1000 entries in one query). To be able to use this a LdapContext (subclass of DirContext) has to be passed in as ldapServerBean (otherwise an exception is thrown). Integer returnedAttributes (producer) Comma-separated list of attributes that should be set in each entry of the result. String scope (producer) Specifies how deeply to search the tree of entries, starting at the base DN. Enum values: object onelevel subtree subtree String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 90.6. Result The result is returned to Out body as a List<javax.naming.directory.SearchResult> object. 90.7. DirContext The URI, ldap:ldapserver , references a Spring bean with the ID, ldapserver . The ldapserver bean may be defined as follows: <bean id="ldapserver" class="javax.naming.directory.InitialDirContext" scope="prototype"> <constructor-arg> <props> <prop key="java.naming.factory.initial">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key="java.naming.provider.url">ldap://localhost:10389</prop> <prop key="java.naming.security.authentication">none</prop> </props> </constructor-arg> </bean> The preceding example declares a regular Sun based LDAP DirContext that connects anonymously to a locally hosted LDAP server. Note DirContext objects are not required to support concurrency by contract. It is therefore important that the directory context is declared with the setting, scope="prototype" , in the bean definition or that the context supports concurrency. In the Spring framework, prototype scoped objects are instantiated each time they are looked up. 90.8. Security concerns related to LDAP injection Note The camel-ldap component uses the message body as filter the search results. Therefore, the message body should be protected from LDAP injection. To assist with this, you can use org.apache.camel.component.ldap.LdapHelper utility class that has method(s) to escape string values to be LDAP injection safe. 
See LDAP Injection for more information. 90.9. Samples Following on from the Spring configuration above, the code sample below sends an LDAP request to filter search a group for a member. The Common Name is then extracted from the response. ProducerTemplate template = exchange.getContext().createProducerTemplate(); Collection<SearchResult> results = template.requestBody( "ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system", "(member=uid=huntc,ou=users,ou=system)", Collection.class); if (results.size() > 0) { // Extract what we need from the device's profile Iterator resultIter = results.iterator(); SearchResult searchResult = (SearchResult) resultIter.(); Attributes attributes = searchResult.getAttributes(); Attribute deviceCNAttr = attributes.get("cn"); String deviceCN = (String) deviceCNAttr.get(); // ... } If no specific filter is required - for example, you just need to look up a single entry - specify a wildcard filter expression. For example, if the LDAP entry has a Common Name, use a filter expression like: 90.9.1. Binding using credentials A Camel end user donated this sample code he used to bind to the ldap server using credentials. Properties props = new Properties(); props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); props.setProperty(Context.PROVIDER_URL, "ldap://localhost:389"); props.setProperty(Context.URL_PKG_PREFIXES, "com.sun.jndi.url"); props.setProperty(Context.REFERRAL, "ignore"); props.setProperty(Context.SECURITY_AUTHENTICATION, "simple"); props.setProperty(Context.SECURITY_PRINCIPAL, "cn=Manager"); props.setProperty(Context.SECURITY_CREDENTIALS, "secret"); DefaultRegistry reg = new DefaultRegistry(); reg.bind("myldap", new InitialLdapContext(props, null)); CamelContext context = new DefaultCamelContext(reg); context.addRoutes( new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("ldap:myldap?base=ou=test"); } } ); context.start(); ProducerTemplate template = context.createProducerTemplate(); Endpoint endpoint = context.getEndpoint("direct:start"); Exchange exchange = endpoint.createExchange(); exchange.getIn().setBody("(uid=test)"); Exchange out = template.send(endpoint, exchange); Collection<SearchResult> data = out.getMessage().getBody(Collection.class); assert data != null; assert !data.isEmpty(); System.out.println(out.getMessage().getBody()); context.stop(); 90.10. Configuring SSL All that is required is to create a custom socket factory and reference it in the InitialDirContext bean - see below sample. 
SSL Configuration <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <sslContextParameters xmlns="http://camel.apache.org/schema/spring" id="sslContextParameters" > <keyManagers keyPassword="{{keystore.pwd}}"> <keyStore resource="{{keystore.url}}" password="{{keystore.pwd}}"/> </keyManagers> </sslContextParameters> <bean id="customSocketFactory" class="com.example.ldap.CustomSocketFactory"> <constructor-arg index="0" ref="sslContextParameters"/> </bean> <bean id="ldapserver" class="javax.naming.directory.InitialDirContext" scope="prototype"> <constructor-arg> <props> <prop key="java.naming.factory.initial">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key="java.naming.provider.url">ldaps://127.0.0.1:10636</prop> <prop key="java.naming.security.protocol">ssl</prop> <prop key="java.naming.security.authentication">none</prop> <prop key="java.naming.ldap.factory.socket">com.example.ldap.CustomSocketFactory</prop> </props> </constructor-arg> </bean> </beans> Custom Socket Factory package com.example.ldap; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.security.KeyStore; import javax.net.SocketFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.TrustManagerFactory; import org.apache.camel.support.jsse.SSLContextParameters; /** * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory */ public class CustomSocketFactory extends SSLSocketFactory { private static SSLSocketFactory socketFactory; /** * Called by the getDefault() method. 
*/ public CustomSocketFactory() { } /** * Called by Spring Boot DI to initialize an instance of SocketFactory */ public CustomSocketFactory(SSLContextParameters sslContextParameters) { try { KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); tmf.init(keyStore); SSLContext ctx = SSLContext.getInstance("TLS"); ctx.init(null, tmf.getTrustManagers(), null); socketFactory = ctx.getSocketFactory(); } catch (Exception ex) { ex.printStackTrace(System.err); } } /** * Getter for the SocketFactory */ public static SocketFactory getDefault() { return new CustomSocketFactory(); } @Override public String[] getDefaultCipherSuites() { return socketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return socketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { return socketFactory.createSocket(socket, string, i, bln); } @Override public Socket createSocket(String string, int i) throws IOException { return socketFactory.createSocket(string, i); } @Override public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { return socketFactory.createSocket(string, i, ia, i1); } @Override public Socket createSocket(InetAddress ia, int i) throws IOException { return socketFactory.createSocket(ia, i); } @Override public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws IOException { return socketFactory.createSocket(ia, i, ia1, i1); } } 90.11. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.ldap.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.ldap.enabled Whether to enable auto configuration of the ldap component. This is enabled by default. Boolean camel.component.ldap.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ldap-starter</artifactId> </dependency>",
"ldap:ldapServerBean[?options]",
"ldap:dirContextName",
"<bean id=\"ldapserver\" class=\"javax.naming.directory.InitialDirContext\" scope=\"prototype\"> <constructor-arg> <props> <prop key=\"java.naming.factory.initial\">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key=\"java.naming.provider.url\">ldap://localhost:10389</prop> <prop key=\"java.naming.security.authentication\">none</prop> </props> </constructor-arg> </bean>",
"ProducerTemplate template = exchange.getContext().createProducerTemplate(); Collection<SearchResult> results = template.requestBody( \"ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system\", \"(member=uid=huntc,ou=users,ou=system)\", Collection.class); if (results.size() > 0) { // Extract what we need from the device's profile Iterator resultIter = results.iterator(); SearchResult searchResult = (SearchResult) resultIter.next(); Attributes attributes = searchResult.getAttributes(); Attribute deviceCNAttr = attributes.get(\"cn\"); String deviceCN = (String) deviceCNAttr.get(); // }",
"(cn=*)",
"Properties props = new Properties(); props.setProperty(Context.INITIAL_CONTEXT_FACTORY, \"com.sun.jndi.ldap.LdapCtxFactory\"); props.setProperty(Context.PROVIDER_URL, \"ldap://localhost:389\"); props.setProperty(Context.URL_PKG_PREFIXES, \"com.sun.jndi.url\"); props.setProperty(Context.REFERRAL, \"ignore\"); props.setProperty(Context.SECURITY_AUTHENTICATION, \"simple\"); props.setProperty(Context.SECURITY_PRINCIPAL, \"cn=Manager\"); props.setProperty(Context.SECURITY_CREDENTIALS, \"secret\"); DefaultRegistry reg = new DefaultRegistry(); reg.bind(\"myldap\", new InitialLdapContext(props, null)); CamelContext context = new DefaultCamelContext(reg); context.addRoutes( new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"ldap:myldap?base=ou=test\"); } } ); context.start(); ProducerTemplate template = context.createProducerTemplate(); Endpoint endpoint = context.getEndpoint(\"direct:start\"); Exchange exchange = endpoint.createExchange(); exchange.getIn().setBody(\"(uid=test)\"); Exchange out = template.send(endpoint, exchange); Collection<SearchResult> data = out.getMessage().getBody(Collection.class); assert data != null; assert !data.isEmpty(); System.out.println(out.getMessage().getBody()); context.stop();",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:context=\"http://www.springframework.org/schema/context\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <sslContextParameters xmlns=\"http://camel.apache.org/schema/spring\" id=\"sslContextParameters\" > <keyManagers keyPassword=\"{{keystore.pwd}}\"> <keyStore resource=\"{{keystore.url}}\" password=\"{{keystore.pwd}}\"/> </keyManagers> </sslContextParameters> <bean id=\"customSocketFactory\" class=\"com.example.ldap.CustomSocketFactory\"> <constructor-arg index=\"0\" ref=\"sslContextParameters\"/> </bean> <bean id=\"ldapserver\" class=\"javax.naming.directory.InitialDirContext\" scope=\"prototype\"> <constructor-arg> <props> <prop key=\"java.naming.factory.initial\">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key=\"java.naming.provider.url\">ldaps://127.0.0.1:10636</prop> <prop key=\"java.naming.security.protocol\">ssl</prop> <prop key=\"java.naming.security.authentication\">none</prop> <prop key=\"java.naming.ldap.factory.socket\">com.example.ldap.CustomSocketFactory</prop> </props> </constructor-arg> </bean> </beans>",
"package com.example.ldap; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.security.KeyStore; import javax.net.SocketFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.TrustManagerFactory; import org.apache.camel.support.jsse.SSLContextParameters; /** * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory */ public class CustomSocketFactory extends SSLSocketFactory { private static SSLSocketFactory socketFactory; /** * Called by the getDefault() method. */ public CustomSocketFactory() { } /** * Called by Spring Boot DI to initialize an instance of SocketFactory */ public CustomSocketFactory(SSLContextParameters sslContextParameters) { try { KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); TrustManagerFactory tmf = TrustManagerFactory.getInstance(\"SunX509\"); tmf.init(keyStore); SSLContext ctx = SSLContext.getInstance(\"TLS\"); ctx.init(null, tmf.getTrustManagers(), null); socketFactory = ctx.getSocketFactory(); } catch (Exception ex) { ex.printStackTrace(System.err); } } /** * Getter for the SocketFactory */ public static SocketFactory getDefault() { return new CustomSocketFactory(); } @Override public String[] getDefaultCipherSuites() { return socketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return socketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { return socketFactory.createSocket(socket, string, i, bln); } @Override public Socket createSocket(String string, int i) throws IOException { return socketFactory.createSocket(string, i); } @Override public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { return socketFactory.createSocket(string, i, ia, i1); } @Override public Socket createSocket(InetAddress ia, int i) throws IOException { return socketFactory.createSocket(ia, i); } @Override public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws IOException { return socketFactory.createSocket(ia, i, ia1, i1); } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-ldap-component-starter |
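To tie together the LDAP endpoint options described above (base, scope, returnedAttributes, pageSize) with the DirContext configuration from section 90.7, the following is a minimal, self-contained sketch rather than a definitive implementation: the host and port, the DNs, the returned attribute names and the direct:findUsers route name are all illustrative assumptions. An LdapContext is bound in the registry (instead of a plain DirContext) so that the pageSize option can be used, as noted in section 90.5.2.

import java.util.Collection;
import java.util.Properties;

import javax.naming.Context;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.InitialLdapContext;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.support.DefaultRegistry;

public class LdapSearchExample {

    public static void main(String[] args) throws Exception {
        // Anonymous connection properties, mirroring section 90.7; the URL and
        // DNs below are placeholders for a real directory server.
        Properties props = new Properties();
        props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        props.setProperty(Context.PROVIDER_URL, "ldap://localhost:10389");
        props.setProperty(Context.SECURITY_AUTHENTICATION, "none");

        // Bind an LdapContext under the name used in the endpoint URI.
        DefaultRegistry registry = new DefaultRegistry();
        registry.bind("ldapserver", new InitialLdapContext(props, null));

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // The message body is used as the LDAP search filter; the query
                // options restrict the returned attributes and page the results.
                from("direct:findUsers")
                    .to("ldap:ldapserver?base=ou=users,ou=system"
                        + "&scope=subtree&returnedAttributes=cn,mail&pageSize=500");
            }
        });
        context.start();

        ProducerTemplate template = context.createProducerTemplate();
        Collection<SearchResult> results = template.requestBody(
                "direct:findUsers", "(objectClass=person)", Collection.class);
        System.out.println("Found " + results.size() + " entries");

        context.stop();
    }
}

As with the samples in the chapter, the body sent to the route must already be a safe LDAP filter; escape any user-supplied values before building it.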
Chapter 106. Quartz | Chapter 106. Quartz Only consumer is supported The Quartz component provides a scheduled delivery of messages using the Quartz Scheduler 2.x . Each endpoint represents a different timer (in Quartz terms, a Trigger and JobDetail). 106.1. Dependencies When using quartz with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-quartz-starter</artifactId> </dependency> 106.2. URI format The component uses either a CronTrigger or a SimpleTrigger . If no cron expression is provided, the component uses a simple trigger. If no groupName is provided, the quartz component uses the Camel group name. 106.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 106.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 106.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 106.4. Component Options The Quartz component supports 13 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean enableJmx (consumer) Whether to enable Quartz JMX which allows to manage the Quartz scheduler from JMX. This options is default true. true boolean prefixInstanceName (consumer) Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext's. true boolean prefixJobNameWithEndpointId (consumer) Whether to prefix the quartz job with the endpoint id. This option is default false. false boolean properties (consumer) Properties to configure the Quartz scheduler. Map propertiesFile (consumer) File name of the properties to load from the classpath. 
String propertiesRef (consumer) References to an existing Properties or Map to lookup in the registry to use for configuring quartz. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean scheduler (advanced) To use the custom configured Quartz scheduler, instead of creating a new Scheduler. Scheduler schedulerFactory (advanced) To use the custom SchedulerFactory which is used to create the Scheduler. SchedulerFactory autoStartScheduler (scheduler) Whether or not the scheduler should be auto started. This options is default true. true boolean interruptJobsOnShutdown (scheduler) Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully. false boolean startDelayedSeconds (scheduler) Seconds to wait before starting the quartz scheduler. int 106.5. Endpoint Options The Quartz endpoint is configured using URI syntax: with the following path and query parameters: 106.5.1. Path Parameters (2 parameters) Name Description Default Type groupName (consumer) The quartz group name to use. The combination of group name and trigger name should be unique. Camel String triggerName (consumer) Required The quartz trigger name to use. The combination of group name and trigger name should be unique. String 106.5.2. Query Parameters (17 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean cron (consumer) Specifies a cron expression to define when to trigger. String deleteJob (consumer) If set to true, then the trigger automatically delete when route stop. Else if set to false, it will remain in scheduler. When set to false, it will also mean user may reuse pre-configured trigger with camel Uri. Just ensure the names match. Notice you cannot have both deleteJob and pauseJob set to true. true boolean durableJob (consumer) Whether or not the job should remain stored after it is orphaned (no triggers point to it). false boolean pauseJob (consumer) If set to true, then the trigger automatically pauses when route stop. Else if set to false, it will remain in scheduler. When set to false, it will also mean user may reuse pre-configured trigger with camel Uri. Just ensure the names match. Notice you cannot have both deleteJob and pauseJob set to true. false boolean recoverableJob (consumer) Instructs the scheduler whether or not the job should be re-executed if a 'recovery' or 'fail-over' situation is encountered. 
false boolean stateful (consumer) Uses a Quartz PersistJobDataAfterExecution and DisallowConcurrentExecution instead of the default job. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern customCalendar (advanced) Specifies a custom calendar to avoid a specific range of dates. Calendar jobParameters (advanced) To configure additional options on the job. Map prefixJobNameWithEndpointId (advanced) Whether the job name should be prefixed with the endpoint id. false boolean triggerParameters (advanced) To configure additional options on the trigger. Map usingFixedCamelContextName (advanced) If it is true, JobDataMap uses the CamelContext name directly to reference the CamelContext; if it is false, JobDataMap uses the CamelContext management name, which could be changed during deploy time. false boolean autoStartScheduler (scheduler) Whether or not the scheduler should be auto started. true boolean startDelayedSeconds (scheduler) Seconds to wait before starting the quartz scheduler. int triggerStartDelay (scheduler) In case the scheduler has already started, the trigger starts slightly after the current time to ensure the endpoint is fully started before the job kicks in. A negative value shifts the trigger start time into the past. 500 long 106.5.3. Configuring quartz.properties file By default Quartz will look for a quartz.properties file in the org/quartz directory of the classpath. If you are using WAR deployments, this means just dropping the quartz.properties file in WEB-INF/classes/org/quartz . However, the Camel Quartz component also allows you to configure properties: Parameter Default Type Description properties null Properties You can configure a java.util.Properties instance. propertiesFile null String File name of the properties to load from the classpath. To do this, you can configure it in Spring XML as follows: <bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent"> <property name="propertiesFile" value="com/mycompany/myquartz.properties"/> </bean> 106.6. Enabling Quartz scheduler in JMX You need to configure the quartz scheduler properties to enable JMX. That typically means setting the option "org.quartz.scheduler.jmx.export" to a true value in the configuration file. This option is set to true by default, unless explicitly disabled. 106.7. Starting the Quartz scheduler The Quartz component offers an option to let the Quartz scheduler be started delayed, or not auto started at all. This is an example: <bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent"> <property name="startDelayedSeconds" value="5"/> </bean> 106.8. Clustering If you use Quartz in clustered mode, that is, the JobStore is clustered, then the Quartz component will not pause/remove triggers when a node is being stopped or shut down. This allows the trigger to keep running on the other nodes in the cluster. Note When running in clustered mode, no checking is done to ensure unique job name/group for endpoints. 106.9. Message Headers Camel adds the getters from the Quartz Execution Context as header values.
The following headers are added: calendar , fireTime , jobDetail , jobInstance , jobRunTime , mergedJobDataMap , nextFireTime , previousFireTime , refireCount , result , scheduledFireTime , scheduler , trigger , triggerName , triggerGroup . The fireTime header contains the java.util.Date of when the exchange was fired. 106.10. Using Cron Triggers Quartz supports Cron-like expressions for specifying timers in a handy format. You can use these expressions in the cron URI parameter, though to preserve valid URI encoding we allow + to be used instead of spaces. For example, the following will fire a message every five minutes, between 12pm (noon) and 6pm, on weekdays: from("quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI") .to("activemq:Totally.Rocks"); which is equivalent to using the cron expression The following table shows the URI character encodings we use to preserve valid URI syntax: URI Character Cron character + Space 106.11. Specifying time zone The Quartz Scheduler allows you to configure the time zone per trigger. For example, to use the timezone of your country, you can do as follows: The timeZone value is any of the values accepted by java.util.TimeZone . 106.12. Configuring misfire instructions The quartz scheduler can be configured with a misfire instruction to handle misfire situations for the trigger. The concrete trigger type that you are using will have defined a set of additional MISFIRE_INSTRUCTION_XXX constants that may be set as this property's value. For example, to configure the simple trigger to use misfire instruction 4: And likewise you can configure the cron trigger with one of its misfire instructions as well: The simple and cron triggers have the following representative misfire instructions: 106.12.1. SimpleTrigger.MISFIRE_INSTRUCTION_FIRE_NOW = 1 (default) Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be fired now by the Scheduler. This instruction should typically only be used for 'one-shot' (non-repeating) Triggers. If it is used on a trigger with a repeat count > 0 then it is equivalent to the instruction MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT. 106.12.2. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_EXISTING_REPEAT_COUNT = 2 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to 'now' (even if the associated Calendar excludes 'now') with the repeat count left as-is. This does obey the Trigger end-time however, so if 'now' is after the end-time the Trigger will not fire again. Use of this instruction causes the trigger to 'forget' the start-time and repeat-count that it was originally set up with (this is only an issue if you for some reason wanted to be able to tell what the original values were at some later time). 106.12.3. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT = 3 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to 'now' (even if the associated Calendar excludes 'now') with the repeat count set to what it would be, if it had not missed any firings. This does obey the Trigger end-time however, so if 'now' is after the end-time the Trigger will not fire again. Use of this instruction causes the trigger to 'forget' the start-time and repeat-count that it was originally set up with.
Instead, the repeat count on the trigger will be changed to whatever the remaining repeat count is (this is only an issue if you for some reason wanted to be able to tell what the original values were at some later time). This instruction could cause the Trigger to go to the 'COMPLETE' state after firing 'now', if all the repeat-fire-times were missed. 106.12.4. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_REMAINING_COUNT = 4 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to the next scheduled time after 'now' - taking into account any associated Calendar and with the repeat count set to what it would be, if it had not missed any firings. Note This instruction could cause the Trigger to go directly to the 'COMPLETE' state if all fire-times were missed. 106.12.5. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_EXISTING_COUNT = 5 Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to the next scheduled time after 'now' - taking into account any associated Calendar, and with the repeat count left unchanged. Note This instruction could cause the Trigger to go directly to the 'COMPLETE' state if the end-time of the trigger has arrived. 106.12.6. CronTrigger.MISFIRE_INSTRUCTION_FIRE_ONCE_NOW = 1 (default) Instructs the Scheduler that upon a mis-fire situation, the CronTrigger wants to be fired now by the Scheduler. 106.12.7. CronTrigger.MISFIRE_INSTRUCTION_DO_NOTHING = 2 Instructs the Scheduler that upon a mis-fire situation, the CronTrigger wants to have its next-fire-time updated to the next time in the schedule after the current time (taking into account any associated Calendar), but it does not want to be fired now. 106.13. Using QuartzScheduledPollConsumerScheduler The Quartz component provides a Polling Consumer scheduler which allows you to use cron based scheduling for Polling Consumers such as the File and FTP consumers. For example, to use a cron based expression to poll for files every 2nd second, a Camel route can be defined simply as: from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?") .to("bean:process"); Notice we define the scheduler=quartz to instruct Camel to use the Quartz based scheduler. Then we use scheduler.xxx options to configure the scheduler. The Quartz scheduler requires the cron option to be set. The following options are supported: Parameter Default Type Description quartzScheduler null org.quartz.Scheduler To use a custom Quartz scheduler. If none is configured, then the shared scheduler from the component is used. cron null String Mandatory : To define the cron expression for triggering the polls. triggerId null String To specify the trigger id. If none is provided, then a UUID is generated and used. triggerGroup QuartzScheduledPollConsumerScheduler String To specify the trigger group. timeZone Default TimeZone The time zone to use for the CRON trigger. Important Remember that these options, when configured on the endpoint URI, must be prefixed with scheduler . For example, to configure the trigger id and group: from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup") .to("bean:process"); There is also a CRON scheduler in Spring, so you can use the following as well: from("file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?") .to("bean:process"); 106.14. Cron Component Support The Quartz component can be used as an implementation of the Camel Cron component.
Maven users will need to add the following additional dependency to their pom.xml : <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cron</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> Users can then use the cron component instead of the quartz component, as in the following route: from("cron://name?schedule=0+0/5+12-18+?+*+MON-FRI") .to("activemq:Totally.Rocks"); 106.15. Spring Boot Auto-Configuration The component supports 14 options, which are listed below. Name Description Default Type camel.component.quartz.auto-start-scheduler Whether or not the scheduler should be auto started. This options is default true. true Boolean camel.component.quartz.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.quartz.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.quartz.enable-jmx Whether to enable Quartz JMX which allows to manage the Quartz scheduler from JMX. This options is default true. true Boolean camel.component.quartz.enabled Whether to enable auto configuration of the quartz component. This is enabled by default. Boolean camel.component.quartz.interrupt-jobs-on-shutdown Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully. false Boolean camel.component.quartz.prefix-instance-name Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext's. true Boolean camel.component.quartz.prefix-job-name-with-endpoint-id Whether to prefix the quartz job with the endpoint id. This option is default false. false Boolean camel.component.quartz.properties Properties to configure the Quartz scheduler. Map camel.component.quartz.properties-file File name of the properties to load from the classpath. String camel.component.quartz.properties-ref References to an existing Properties or Map to lookup in the registry to use for configuring quartz. String camel.component.quartz.scheduler To use the custom configured Quartz scheduler, instead of creating a new Scheduler. The option is a org.quartz.Scheduler type. Scheduler camel.component.quartz.scheduler-factory To use the custom SchedulerFactory which is used to create the Scheduler. 
The option is a org.quartz.SchedulerFactory type. SchedulerFactory camel.component.quartz.start-delayed-seconds Seconds to wait before starting the quartz scheduler. Integer | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-quartz-starter</artifactId> </dependency>",
"quartz://timerName?options quartz://groupName/timerName?options quartz://groupName/timerName?cron=expression quartz://timerName?cron=expression",
"quartz:groupName/triggerName",
"<bean id=\"quartz\" class=\"org.apache.camel.component.quartz.QuartzComponent\"> <property name=\"propertiesFile\" value=\"com/mycompany/myquartz.properties\"/> </bean>",
"<bean id=\"quartz\" class=\"org.apache.camel.component.quartz.QuartzComponent\"> <property name=\"startDelayedSeconds\" value=\"5\"/> </bean>",
"from(\"quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI\") .to(\"activemq:Totally.Rocks\");",
"0 0/5 12-18 ? * MON-FRI",
"quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.timeZone=Europe/Stockholm",
"quartz://myGroup/myTimerName?trigger.repeatInterval=2000&trigger.misfireInstruction=4",
"quartz://myGroup/myTimerName?cron=0/2+*+*+*+*+?&trigger.misfireInstruction=2",
"from(\"file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?\") .to(\"bean:process\");",
"from(\"file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup\") .to(\"bean:process\");",
"from(\"file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?\") .to(\"bean:process\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cron</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>",
"from(\"cron://name?schedule=0+0/5+12-18+?+*+MON-FRI\") .to(\"activemq:Totally.Rocks\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-quartz-component-starter |
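As a short, end-to-end illustration of the endpoint options in this chapter, the following sketch combines a cron trigger, a per-trigger time zone and a misfire instruction, and then reads the fireTime header that the component adds (section 106.9). It is an assumed example rather than part of the product documentation: the reports/nightly group and trigger names, the schedule and the direct:generateReport endpoint are placeholders.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class NightlyReportRoute {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Fires at 02:00 every day in the Stockholm time zone; '+' is used
                // instead of spaces in the cron expression (section 106.10).
                // trigger.misfireInstruction=2 selects
                // CronTrigger.MISFIRE_INSTRUCTION_DO_NOTHING (section 106.12.7).
                from("quartz://reports/nightly"
                        + "?cron=0+0+2+*+*+?"
                        + "&trigger.timeZone=Europe/Stockholm"
                        + "&trigger.misfireInstruction=2")
                    .log("Nightly report triggered at ${header.fireTime}")
                    .to("direct:generateReport");

                // Placeholder for the real report generation logic.
                from("direct:generateReport")
                    .log("Generating report ...");
            }
        });
        context.start();
        Thread.sleep(60_000); // keep the JVM alive briefly for demonstration purposes
        context.stop();
    }
}

Because the combination of group name and trigger name must be unique, reusing reports/nightly for another route would clash with this trigger, so pick distinct names for each scheduled route.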
Chapter 2. Installing the Red Hat JBoss Core Services 2.4.57 | Chapter 2. Installing the Red Hat JBoss Core Services 2.4.57 You can install the Apache HTTP Server 2.4.57 on Red Hat Enterprise Linux or Windows Server. For more information, see the following sections of the installation guide: Installing the JBCS Apache HTTP Server on RHEL from archive files Installing the JBCS Apache HTTP Server on RHEL from RPM packages Installing the JBCS Apache HTTP Server on Windows Server | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_2_release_notes/installing_the_red_hat_jboss_core_services_2_4_57 |
Chapter 1. Red Hat Cluster Suite Overview | Chapter 1. Red Hat Cluster Suite Overview Clustered systems provide reliability, scalability, and availability to critical production services. Using Red Hat Cluster Suite, you can create a cluster to suit your needs for performance, high availability, load balancing, scalability, file sharing, and economy. This chapter provides an overview of Red Hat Cluster Suite components and functions, and consists of the following sections: Section 1.1, "Cluster Basics" Section 1.2, "Red Hat Cluster Suite Introduction" Section 1.3, "Cluster Infrastructure" Section 1.4, "High-availability Service Management" Section 1.5, "Red Hat GFS" Section 1.6, "Cluster Logical Volume Manager" Section 1.7, "Global Network Block Device" Section 1.8, "Linux Virtual Server" Section 1.9, "Cluster Administration Tools" Section 1.10, "Linux Virtual Server Administration GUI" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. Red Hat Cluster Suite provides storage clustering through Red Hat GFS. High-availability clusters provide continuous availability of services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high-availability cluster read and write data (via read-write mounted file systems). Therefore, a high-availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high-availability cluster are not visible from clients outside the cluster. (High-availability clusters are sometimes referred to as failover clusters.) Red Hat Cluster Suite provides high-availability clustering through its High-availability Service Management component. Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Red Hat Cluster Suite provides load-balancing through LVS (Linux Virtual Server). High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described. 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/ch.gfscs.cluster-overview-CSO |
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.9/pr01 |
Chapter 5. Migrating applications to Data Grid 8 | Chapter 5. Migrating applications to Data Grid 8 5.1. Marshalling in Data Grid 8 Marshalling capabilities are significantly refactored in Data Grid 8 to isolate internal objects and user objects. Because Data Grid now handles marshalling of internal classes, you no longer need to handle those internal classes when configuring marshallers with embedded or remote caches. 5.1.1. ProtoStream marshalling By default, Data Grid 8 uses the ProtoStream API to marshall data as Protocol Buffers, a language-neutral, backwards compatible format. Protobuf encoding is a schema-defined format that is now a default standard for many applications and allows greater flexibility when transcoding data in comparison with JBoss Marshalling, which was the default in Data Grid 7. Because the ProtoStream marshaller is based on the Protobuf format, Data Grid can convert to other encodings without first converting to a Java object. When using JBoss Marshalling, it is necessary to convert keys and values to Java objects before converting to any other format. As part of your migration to Data Grid 8, you should start using ProtoStream marshalling for your Java classes. From a high-level, to use the ProtoStream marshaller, you generate SerializationContextInitializer implementations with the ProtoStream processor. First, you add @Proto annotations to your Java classes and then use a ProtoStream processor that Data Grid provides to generate serialization contexts that contain: .proto schemas that provide a structured representation of your Java objects as Protobuf message types. Marshaller implementations to encode your Java objects to Protobuf format. Depending on whether you use embedded or remote caches, Data Grid can automatically register your SerializationContextInitializer implementations. Nested ProtoStream annotations Data Grid 8.2 upgrades to ProtoStream 4.4.0.Final, which requires migration in some cases. In versions, the ProtoStream API did not correctly nest message types with the result that the messages were generated as top-level only. If you have Protobuf-encoded entries in persistent cache stores, you should modify your Java classes so that ProtoStream annotations are at top-level. This ensures that the nesting in your persisted messages matches the nesting in your Java classes, otherwise data incompatibility issues can occur. For example, if you have nested Java classes such as the following: class OuterClass { class InnerClass { @ProtoField(1) int someMethod() { } } } You should adapt the classes so that InnerClass is no longer a child of OuterClass : class InnerClass { @ProtoField(1) int someMethod() { } } Marshalling with Data Grid Server You should use only Protobuf encoding for remote caches in combination with the ProtoStream marshaller for any custom types. Other marshaller implementations, such as JBoss marshalling, require you to use different cache encodings that are not compatible with the Data Grid CLI, Data Grid Console, or with Ickle queries. Cache stores and ProtoStream In Data Grid 7.x, data that you persist to a cache store is not compatible with the ProtoStream marshaller in Data Grid 8. You must use the StoreMigrator utility to migrate data from any Data Grid 7.x cache store to a Data Grid 8 cache store. 5.1.2. Alternative marshaller implementations Data Grid does provide alternative marshaller implementations to ProtoStream help ease migration from older versions. 
You should use those alternative marshallers only as an interim solution while you migrate to ProtoStream marshalling. Note For new projects Red Hat strongly recommends you use only ProtoStream marshalling to avoid any issues with future upgrades or migrations. Deserialization Allow List In keeping with Red Hat's commitment to using inclusive language the term "white list" has been changed to "allow list" for configuring serialization of your Java classes. Data Grid 8.1 <cache-container> <serialization> <white-list> <class>org.infinispan.test.data.Person</class> <regex>org.infinispan.test.data.*</regex> </white-list> </serialization> </cache-container> Data Grid 8.2 <cache-container> <serialization> <allow-list> <class>org.infinispan.test.data.Person</class> <regex>org.infinispan.test.data.*</regex> </allow-list> </serialization> </cache-container> JBoss marshalling In Data Grid 7, JBoss Marshalling is the default marshaller. In Data Grid 8, ProtoStream marshalling is the default. Note You should use JavaSerializationMarshaller instead of JBoss Marshalling if you have a client requirement to use Java serialization. If you must use JBoss Marshalling as a temporary solution during migration to Data Grid 8, do the following: Embedded caches Add the infinispan-jboss-marshalling dependency to your classpath. Configure Data Grid to use the JBossUserMarshaller , for example: <serialization marshaller="org.infinispan.jboss.marshalling.core.JBossUserMarshaller"/> Add your classes to the list of classes that Data Grid allows for deserialization. Remote caches Data Grid Server does not support JBoss Marshalling and the GenericJBossMarshaller is no longer automatically configured if the infinispan-jboss-marshalling module is on the classpath. You must configure Hot Rod Java clients to use JBoss Marshalling as follows: RemoteCacheManager .marshaller("org.infinispan.jboss.marshalling.commons.GenericJBossMarshaller"); hotrod-client.properties Additional resources Cache Encoding and Marshalling 5.2. Migrating applications to the AutoProtoSchemaBuilder annotation versions of Data Grid use the MessageMarshaller interface in the ProtoStream API to configure marshalling. Both the MessageMarshaller API and the ProtoSchemaBuilder annotation are deprecated as of Data Grid 8.1.1, which corresponds to ProtoStream 4.3.4. Using the MessageMarshaller interface involves either: Manually creating Protobuf schema. Adding the ProtoSchemaBuilder annotation to Java classes and then generating Protobuf schema. However, these techniques for configuring ProtoStream marshalling are not as efficient and reliable as the AutoProtoSchemaBuilder annotation, which is available starting with Data Grid 8.1.1. Simply add the AutoProtoSchemaBuilder annotation to your Java classes and to generate SerializationContextInitializer implementations that include Protobuf schema and associated marshallers. Red Hat recommends that you start using the AutoProtoSchemaBuilder annotation to get the best results from the ProtoStream marshaller. The following code examples demonstrate how you can migrate applications from the MessageMarshaller API to the AutoProtoSchemaBuilder annotation. 5.2.1. Basic MessageMarshaller implementation This example contains some fields that use non-default types. The text field has a different order and the fixed32 field conflicts with the generated Protobuf schema type because the code generator uses int type by default. 
SimpleEntry.java public class SimpleEntry { private String description; private Collection<String> text; private int intDefault; private Integer fixed32; // public Getter, Setter, equals and HashCode methods omitted for brevity } SimpleEntryMarshaller.java import org.infinispan.protostream.MessageMarshaller; public class SimpleEntryMarshaller implements MessageMarshaller<SimpleEntry> { @Override public void writeTo(ProtoStreamWriter writer, SimpleEntry testEntry) throws IOException { writer.writeString("description", testEntry.getDescription()); writer.writeInt("intDefault", testEntry.getIntDefault()); writer.writeInt("fix32", testEntry.getFixed32()); writer.writeCollection("text", testEntry.getText(), String.class); } @Override public SimpleEntry readFrom(MessageMarshaller.ProtoStreamReader reader) throws IOException { SimpleEntry x = new SimpleEntry(); x.setDescription(reader.readString("description")); x.setIntDefault(reader.readInt("intDefault")); x.setFixed32(reader.readInt("fix32")); x.setText(reader.readCollection("text", new LinkedList<String>(), String.class)); return x; } } Resulting Protobuf schema syntax = "proto2"; package example; message SimpleEntry { required string description = 1; optional int32 intDefault = 2; optional fixed32 fix32 = 3; repeated string text = 4; } Migrated to the AutoProtoSchemaBuilder annotation SimpleEntry.java import org.infinispan.protostream.annotations.ProtoField; import org.infinispan.protostream.descriptors.Type; public class SimpleEntry { private String description; private Collection<String> text; private int intDefault; private Integer fixed32; @ProtoField(number = 1) public String getDescription() {...} @ProtoField(number = 4, collectionImplementation = LinkedList.class) public Collection<String> getText() {...} @ProtoField(number = 2, defaultValue = "0") public int getIntDefault() {...} @ProtoField(number = 3, type = Type.FIXED32) public Integer getFixed32() {...} // public Getter, Setter, equals and HashCode methods and convenient constructors omitted for brevity } SimpleEntryInitializer.java import org.infinispan.protostream.GeneratedSchema; import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; @AutoProtoSchemaBuilder(includeClasses = { SimpleEntry.class }, schemaFileName = "simple.proto", schemaFilePath = "proto", schemaPackageName = "example") public interface SimpleEntryInitializer extends GeneratedSchema { } Important observations Field 2 is defined as int which the ProtoStream marshaller in versions did not check. Because the Java int field is not nullable the ProtoStream processor will fail. The Java int field must be required or initialized with a defaultValue . From a Java application perspective, the int field is initialized with "0" so you can use defaultValue without any impact as any put operation will set it. Change to required is not a problem from the stored data perspective if always present, but it might cause issues for different clients. Field 3 must be explicitly set to Type.FIXED32 for compatibility. The text collection must be set in the correct order for the resulting Protobuf schema. Important The order of the text collection in your Protobuf schema must be the same before and after migration. Likewise, you must set the fixed32 type during migration. If not, client applications might throw the following exception and fail to start: In other cases, you might observe incomplete or inaccurate results in your cached data. 5.2.2. 
MessageMarshaller implementation with custom types This section provides an example migration for a MessageMarshaller implementation that contains fields that ProtoStream does not natively handle. The following example uses the BigInteger class but applies to any class, even a Data Grid adapter or a custom class. Note The BigInteger class is immutable so does not have a no-argument constructor. CustomTypeEntry.java import java.math.BigInteger; public class CustomTypeEntry { final String description; final BigInteger bigInt; // public Getter, Setter, equals and HashCode methods and convenient constructors omitted for brevity } CustomTypeEntryMarshaller.java import org.infinispan.protostream.MessageMarshaller; public class CustomTypeEntryMarshaller implements MessageMarshaller<CustomTypeEntry> { @Override public void writeTo(ProtoStreamWriter writer, CustomTypeEntry testEntry) throws IOException { writer.writeString("description", testEntry.description); writer.writeString("bigInt", testEntry.bigInt.toString()); } @Override public CustomTypeEntry readFrom(MessageMarshaller.ProtoStreamReader reader) throws IOException { final String desc = reader.readString("description"); final BigInteger bInt = new BigInteger(reader.readString("bigInt")); return new CustomTypeEntry(desc, bInt); } } CustomTypeEntry.proto syntax = "proto2"; package example; message CustomTypeEntry { required string description = 1; required string bigInt = 2; } Migrated code with an adapter class You can use the ProtoAdapter annotation to marshall a CustomType class in a way that generates Protobuf schema that is compatible with Protobuf schema that you created with MessageMarshaller implementations. With this approach, you: Must not add annotations to the CustomTypeEntry class. Create a CustomTypeEntryAdapter class that uses the @ProtoAdapter annotation to control how the Protobuf schema and marshaller is generated. Include the CustomTypeEntryAdapter class with the @AutoProtoSchemaBuilder annotation. Note Because the AutoProtoSchemaBuilder annotation does not reference the CustomTypeEntry class, any annotations contained in that class are ignored. 
The following example shows the CustomTypeEntryAdapter class that contains ProtoStream annotations for the CustomTypeEntry class: CustomTypeEntryAdapter.java import java.math.BigInteger; import org.infinispan.protostream.annotations.ProtoAdapter; import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; @ProtoAdapter(CustomTypeEntry.class) public class CustomTypeEntryAdapter { @ProtoFactory public CustomTypeEntry create(String description, String bigInt) { return new CustomTypeEntry(description, new BigInteger(bigInt)); } @ProtoField(number = 1, required = true) public String getDescription(CustomTypeEntry t) { return t.description; } @ProtoField(number = 2, required = true) public String getBigInt(CustomTypeEntry t) { return t.bigInt.toString(); } } The following example shows the SerializationContextInitializer with AutoProtoSchemaBuilder annotations that reference the CustomTypeEntryAdapter class: CustomTypeEntryInitializer.java import org.infinispan.protostream.GeneratedSchema; import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; @AutoProtoSchemaBuilder(includeClasses = { CustomTypeEntryAdapter.class }, schemaFileName = "custom.proto", schemaFilePath = "proto", schemaPackageName = "example") public interface CustomTypeAdapterInitializer extends GeneratedSchema { } Migrated code without an adapter class Instead of creating an adapter class, you can add ProtoStream annotations directly to the CustomTypeEntry class. Important In this example, the generated Protobuf schema is not compatible with data in caches that was added via the MessageMarshaller interface because the BigInteger is a separate message. Even if the adapter field writes the same String, it is not possible to unmarshall the data. The following example shows the CustomTypeEntry class that directly contains ProtoStream annotations: CustomTypeEntry.java import java.math.BigInteger; public class CustomTypeEntry { @ProtoField(number = 1) final String description; @ProtoField(number = 2) final BigInteger bigInt; @ProtoFactory public CustomTypeEntry(String description, BigInteger bigInt) { this.description = description; this.bigInt = bigInt; } // public Getter, Setter, equals and HashCode methods and convenient constructors omitted for brevity } The following example shows the SerializationContextInitializer with AutoProtoSchemaBuilder annotations that reference the CustomTypeEntry and BigIntegerAdapter classes: CustomTypeEntryInitializer.java import org.infinispan.protostream.GeneratedSchema; import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; import org.infinispan.protostream.types.java.math.BigIntegerAdapter; @AutoProtoSchemaBuilder(includeClasses = { CustomTypeEntry.class, BigIntegerAdapter.class }, schemaFileName = "customtype.proto", schemaFilePath = "proto", schemaPackageName = "example") public interface CustomTypeInitializer extends GeneratedSchema { } When you generate the Protobuf schema from the preceding SerializationContextInitializer implementation, it results in the following Protobuf schema: CustomTypeEntry.proto syntax = "proto2"; package example; message BigInteger { optional bytes bytes = 1; } message CustomTypeEntry { optional string description = 1; optional BigInteger bigInt = 2; } | [
"class OuterClass { class InnerClass { @ProtoField(1) int someMethod() { } } }",
"class InnerClass { @ProtoField(1) int someMethod() { } }",
"<cache-container> <serialization> <white-list> <class>org.infinispan.test.data.Person</class> <regex>org.infinispan.test.data.*</regex> </white-list> </serialization> </cache-container>",
"<cache-container> <serialization> <allow-list> <class>org.infinispan.test.data.Person</class> <regex>org.infinispan.test.data.*</regex> </allow-list> </serialization> </cache-container>",
"<serialization marshaller=\"org.infinispan.jboss.marshalling.core.JBossUserMarshaller\"/>",
".marshaller(\"org.infinispan.jboss.marshalling.commons.GenericJBossMarshaller\");",
"infinispan.client.hotrod.marshaller = GenericJBossMarshaller",
"public class SimpleEntry { private String description; private Collection<String> text; private int intDefault; private Integer fixed32; // public Getter, Setter, equals and HashCode methods omitted for brevity }",
"import org.infinispan.protostream.MessageMarshaller; public class SimpleEntryMarshaller implements MessageMarshaller<SimpleEntry> { @Override public void writeTo(ProtoStreamWriter writer, SimpleEntry testEntry) throws IOException { writer.writeString(\"description\", testEntry.getDescription()); writer.writeInt(\"intDefault\", testEntry.getIntDefault()); writer.writeInt(\"fix32\", testEntry.getFixed32()); writer.writeCollection(\"text\", testEntry.getText(), String.class); } @Override public SimpleEntry readFrom(MessageMarshaller.ProtoStreamReader reader) throws IOException { SimpleEntry x = new SimpleEntry(); x.setDescription(reader.readString(\"description\")); x.setIntDefault(reader.readInt(\"intDefault\")); x.setFixed32(reader.readInt(\"fix32\")); x.setText(reader.readCollection(\"text\", new LinkedList<String>(), String.class)); return x; } }",
"syntax = \"proto2\"; package example; message SimpleEntry { required string description = 1; optional int32 intDefault = 2; optional fixed32 fix32 = 3; repeated string text = 4; }",
"import org.infinispan.protostream.annotations.ProtoField; import org.infinispan.protostream.descriptors.Type; public class SimpleEntry { private String description; private Collection<String> text; private int intDefault; private Integer fixed32; @ProtoField(number = 1) public String getDescription() {...} @ProtoField(number = 4, collectionImplementation = LinkedList.class) public Collection<String> getText() {...} @ProtoField(number = 2, defaultValue = \"0\") public int getIntDefault() {...} @ProtoField(number = 3, type = Type.FIXED32) public Integer getFixed32() {...} // public Getter, Setter, equals and HashCode methods and convenient constructors omitted for brevity }",
"import org.infinispan.protostream.GeneratedSchema; import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; @AutoProtoSchemaBuilder(includeClasses = { SimpleEntry.class }, schemaFileName = \"simple.proto\", schemaFilePath = \"proto\", schemaPackageName = \"example\") public interface SimpleEntryInitializer extends GeneratedSchema { }",
"Exception ( ISPN004034: Unable to unmarshall bytes )",
"import java.math.BigInteger; public class CustomTypeEntry { final String description; final BigInteger bigInt; // public Getter, Setter, equals and HashCode methods and convenient constructors omitted for brevity }",
"import org.infinispan.protostream.MessageMarshaller; public class CustomTypeEntryMarshaller implements MessageMarshaller<CustomTypeEntry> { @Override public void writeTo(ProtoStreamWriter writer, CustomTypeEntry testEntry) throws IOException { writer.writeString(\"description\", testEntry.description); writer.writeString(\"bigInt\", testEntry.bigInt.toString()); } @Override public CustomTypeEntry readFrom(MessageMarshaller.ProtoStreamReader reader) throws IOException { final String desc = reader.readString(\"description\"); final BigInteger bInt = new BigInteger(reader.readString(\"bigInt\")); return new CustomTypeEntry(desc, bInt); } }",
"syntax = \"proto2\"; package example; message CustomTypeEntry { required string description = 1; required string bigInt = 2; }",
"import java.math.BigInteger; import org.infinispan.protostream.annotations.ProtoAdapter; import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; @ProtoAdapter(CustomTypeEntry.class) public class CustomTypeEntryAdapter { @ProtoFactory public CustomTypeEntry create(String description, String bigInt) { return new CustomTypeEntry(description, new BigInteger(bigInt)); } @ProtoField(number = 1, required = true) public String getDescription(CustomTypeEntry t) { return t.description; } @ProtoField(number = 2, required = true) public String getBigInt(CustomTypeEntry t) { return t.bigInt.toString(); } }",
"import org.infinispan.protostream.GeneratedSchema; import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; @AutoProtoSchemaBuilder(includeClasses = { CustomTypeEntryAdapter.class }, schemaFileName = \"custom.proto\", schemaFilePath = \"proto\", schemaPackageName = \"example\") public interface CustomTypeAdapterInitializer extends GeneratedSchema { }",
"import java.math.BigInteger; public class CustomTypeEntry { @ProtoField(number = 1) final String description; @ProtoField(number = 2) final BigInteger bigInt; @ProtoFactory public CustomTypeEntry(String description, BigInteger bigInt) { this.description = description; this.bigInt = bigInt; } // public Getter, Setter, equals and HashCode methods and convenient constructors omitted for brevity }",
"import org.infinispan.protostream.GeneratedSchema; import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder; import org.infinispan.protostream.types.java.math.BigIntegerAdapter; @AutoProtoSchemaBuilder(includeClasses = { CustomTypeEntry.class, BigIntegerAdapter.class }, schemaFileName = \"customtype.proto\", schemaFilePath = \"proto\", schemaPackageName = \"example\") public interface CustomTypeInitializer extends GeneratedSchema { }",
"syntax = \"proto2\"; package example; message BigInteger { optional bytes bytes = 1; } message CustomTypeEntry { optional string description = 1; optional BigInteger bigInt = 2; }"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/migrating_to_data_grid_8/application-migration |
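The registration sketch referenced above is shown next. It is not part of the migrated documentation: it assumes the default annotation-processor behaviour of generating a CustomTypeAdapterInitializerImpl class for the CustomTypeAdapterInitializer interface, and it assumes the sketch class lives in the same package as CustomTypeEntry so it can read the package-private description field. Adjust the generated class name to whatever your build actually produces.

```java
import java.math.BigInteger;

import org.infinispan.protostream.GeneratedSchema;
import org.infinispan.protostream.ProtobufUtil;
import org.infinispan.protostream.SerializationContext;

public class AdapterSchemaRoundTrip {

    public static void main(String[] args) throws Exception {
        // Standalone ProtoStream context; no running cache is required for this check.
        SerializationContext ctx = ProtobufUtil.newSerializationContext();

        // CustomTypeAdapterInitializerImpl is the assumed name of the class that the
        // annotation processor generates for CustomTypeAdapterInitializer.
        GeneratedSchema schema = new CustomTypeAdapterInitializerImpl();
        schema.registerSchema(ctx);       // registers the generated custom.proto schema
        schema.registerMarshallers(ctx);  // registers the adapter-based marshaller

        // Round-trip one entry to confirm the adapter writes and reads the BigInteger
        // as a string field, matching the MessageMarshaller-based schema it replaces.
        CustomTypeEntry original = new CustomTypeEntry("example", new BigInteger("1234567890"));
        byte[] bytes = ProtobufUtil.toWrappedByteArray(ctx, original);
        CustomTypeEntry copy = ProtobufUtil.fromWrappedByteArray(ctx, bytes);

        System.out.println(copy.description); // prints "example"
    }
}
```

The same pattern applies to the annotation-only variant: swap in the generated implementation of CustomTypeInitializer, keeping in mind that its schema stores the BigInteger as a nested message and therefore cannot read data written with the string-based schema.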
Preface | Preface As a developer of business decisions , you can use Red Hat build of OptaPlanner to develop solvers that determine the optimal solution to planning problems. OptaPlanner is a built-in component of Red Hat Decision Manager. You can use solvers as part of your services in Red Hat Decision Manager to optimize limited resources with specific constraints. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/pr01 |
Chapter 4. Logging 5.8 | Chapter 4. Logging 5.8 4.1. Logging 5.8 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 4.1.1. Logging 5.8.4 This release includes OpenShift Logging Bug Fix Release 5.8.4 . 4.1.1.1. Bug fixes Before this update, the developer console's logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. ( LOG-4905 ) Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles deploys based on the setting of the default log storage, just like all the roles used to do before. The read roles deploys based on whether the logging console plugin is active. ( LOG-4987 ) Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input will have its own service named with the convention of <CLF.Name>-<input.Name> . ( LOG-5009 ) Before this update, the ClusterLogForwarder did not report errors when forwarding logs to cloudwatch without a secret. With this update, the following error message appears when forwarding logs to cloudwatch without a secret: secret must be provided for cloudwatch output . ( LOG-5021 ) Before this update, the log_forwarder_input_info included application , infrastructure , and audit input metric points. With this update, http is also added as a metric point. ( LOG-5043 ) 4.1.1.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2022-3545 CVE-2022-24963 CVE-2022-36402 CVE-2022-41858 CVE-2023-2166 CVE-2023-2176 CVE-2023-3777 CVE-2023-3812 CVE-2023-4015 CVE-2023-4622 CVE-2023-4623 CVE-2023-5178 CVE-2023-5363 CVE-2023-5388 CVE-2023-5633 CVE-2023-6679 CVE-2023-7104 CVE-2023-27043 CVE-2023-38409 CVE-2023-40283 CVE-2023-42753 CVE-2023-43804 CVE-2023-45803 CVE-2023-46813 CVE-2024-20918 CVE-2024-20919 CVE-2024-20921 CVE-2024-20926 CVE-2024-20945 CVE-2024-20952 4.1.2. Logging 5.8.3 This release includes Logging Bug Fix 5.8.3 and Logging Security Fix 5.8.3 4.1.2.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed. With this update, the Loki Operator is watching for changes to the ConfigMap and automatically updates the generated configuration. ( LOG-4969 ) Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. ( LOG-4822 ) Before this update the Cluster Logging Operator would generate collector configuration fields for outputs that did not specify a secret to use the service account bearer token. 
With this update, an output does not require authentication, resolving the issue. ( LOG-4962 ) Before this update, the tls.insecureSkipVerify field of an output was not set to a value of true without a secret defined. With this update, a secret is no longer required to set this value. ( LOG-4963 ) Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. ( LOG-4893 ) 4.1.2.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2023-7104 CVE-2023-27043 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 4.1.3. Logging 5.8.2 This release includes OpenShift Logging Bug Fix Release 5.8.2 . 4.1.3.1. Bug fixes Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4890 ) Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. ( LOG-4947 ) Before this update, the logging view plugin of the OpenShift Container Platform web console did not allow for custom node placement and tolerations. With this update, defining custom node placements and tolerations has been added to the logging view plugin of the OpenShift Container Platform web console. ( LOG-4912 ) 4.1.3.2. CVEs CVE-2022-44638 CVE-2023-1192 CVE-2023-5345 CVE-2023-20569 CVE-2023-26159 CVE-2023-39615 CVE-2023-45871 4.1.4. Logging 5.8.1 This release includes OpenShift Logging Bug Fix Release 5.8.1 and OpenShift Logging Bug Fix Release 5.8.1 Kibana . 4.1.4.1. Enhancements 4.1.4.1.1. Log Collection With this update, while configuring Vector as a collector, you can add logic to the Red Hat OpenShift Logging Operator to use a token specified in the secret in place of the token associated with the service account. ( LOG-4780 ) With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. ( LOG-4828 ) 4.1.4.2. Bug fixes Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. ( LOG-4452 ) Before this update, the Vector set the default log level incorrectly. With this update, the correct log level is set by improving the enhancement of regular expression, or regexp , for log level detection. ( LOG-4480 ) Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4683 ) Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. 
( LOG-4706 ) Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder . With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. ( LOG-4741 ) Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. With this update, the Vector log collector pods start successfully on IBM Power architecture machines. ( LOG-4768 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt . ( LOG-4791 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication and also using the associated token and ca.crt . ( LOG-4852 ) Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. ( LOG-4811 ) Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. ( LOG-4815 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4836 ) Before this update, while removing the inputs.receiver section in the ClusterLogForwarder , the HTTP input services and its associated secrets were not deleted. With this update, the HTTP input resources are deleted when not needed. ( LOG-4612 ) Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. ( LOG-4821 ) Before this update, changing a LogQL query that used controls such as time range or severity changed the label matcher operator defining it like a regular expression. With this update, regular expression operators remain unchanged when updating the query. ( LOG-4841 ) 4.1.4.3. 
CVEs CVE-2007-4559 CVE-2021-3468 CVE-2021-3502 CVE-2021-3826 CVE-2021-43618 CVE-2022-3523 CVE-2022-3565 CVE-2022-3594 CVE-2022-4285 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1076 CVE-2023-1079 CVE-2023-1206 CVE-2023-1249 CVE-2023-1252 CVE-2023-1652 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-2731 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3316 CVE-2023-3358 CVE-2023-3576 CVE-2023-3609 CVE-2023-3772 CVE-2023-3773 CVE-2023-4016 CVE-2023-4128 CVE-2023-4155 CVE-2023-4194 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4273 CVE-2023-4641 CVE-2023-22745 CVE-2023-26545 CVE-2023-26965 CVE-2023-26966 CVE-2023-27522 CVE-2023-29491 CVE-2023-29499 CVE-2023-30456 CVE-2023-31486 CVE-2023-32324 CVE-2023-32573 CVE-2023-32611 CVE-2023-32665 CVE-2023-33203 CVE-2023-33285 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-34410 CVE-2023-35825 CVE-2023-36054 CVE-2023-37369 CVE-2023-38197 CVE-2023-38545 CVE-2023-38546 CVE-2023-39191 CVE-2023-39975 CVE-2023-44487 4.1.5. Logging 5.8.0 This release includes OpenShift Logging Bug Fix Release 5.8.0 and OpenShift Logging Bug Fix Release 5.8.0 Kibana . 4.1.5.1. Deprecation notice In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. 4.1.5.2. Enhancements 4.1.5.2.1. Log Collection With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs . ( LOG-3819 ) With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. ( LOG-1343 ) Important In order to support multi-cluster log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly-performing containers from overloading the Logging and the output limits put a ceiling on the rate of logs shipped to a given data store. ( LOG-884 ) With this update, you can configure the log collector to look for HTTP connections and receive logs as an HTTP server, also known as a webhook. 
( LOG-4562 ) With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. ( LOG-3982 ) 4.1.5.2.2. Log Storage With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. ( LOG-3841 ) With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during OpenShift Container Platform cluster restarts by keeping ingestion and the query path available. ( LOG-3839 ) With this update, the reliability of existing LokiStack installations are seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. ( LOG-3840 ) With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. ( LOG-3266 ) With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for OpenShift Container Platform clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day). ( LOG-4329 ) With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. ( LOG-4327 ) 4.1.5.2.3. Log Console With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. ( LOG-3856 ) With this update, OpenShift Container Platform application owners can receive notifications for application log-based alerts on the OpenShift Container Platform web console Developer perspective for OpenShift Container Platform version 4.14 and later. ( LOG-3548 ) 4.1.5.3. Known Issues Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension. As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system and this should be configured from the Splunk end. Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This created extra work for the server set up and tore down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. ( LOG-4609 ) Currently, when using FluentD as the collector, the collector pod cannot start on the OpenShift Container Platform IPv6-enabled cluster. The pod logs produce the fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known error. There is currently no workaround for this issue. ( LOG-4706 ) Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. ( LOG-4709 ) Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator . There is currently no workaround for this issue. 
( LOG-4403 ) Currently, when deploying the logging version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in CrashLoopBackOff status, while using FluentD as a collector. There is currently no workaround for this issue. ( LOG-3933 ) 4.1.5.4. CVEs CVE-2023-40217 4.2. Installing Logging OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. Important You must install the Red Hat OpenShift Logging Operator after the log store Operator. You deploy logging by installing the Loki Operator to manage your log store, followed by the Red Hat OpenShift Logging Operator to manage the components of logging. You can use either the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to install or configure logging. Tip You can alternatively apply all example objects. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. 4.2.1. Installing Logging and the Loki Operator using the CLI To install and configure logging on your OpenShift Container Platform cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OpenShift Container Platform CLI. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. 
Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for the Red Hat OpenShift Logging Operator: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-logging: "true" openshift.io/cluster-monitoring: "true" 2 1 The Red Hat OpenShift Logging Operator is only deployable to the openshift-logging namespace. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 1 You must specify the openshift-logging namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. 
token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging CR object: Example ClusterLogging CR object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . Apply the ClusterLogging CR object by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 4.2.2. Installing Logging and the Loki Operator using the web console To install and configure logging on your OpenShift Container Platform cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OperatorHub within the web console. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . 
Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Install the Red Hat OpenShift Logging Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. 
This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Important It is not possible to change the number 1x for the deployment size. Click Create . Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. In the YAML field, replace the code with the following: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. Additional resources About OVN-Kubernetes network policy | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/logging/logging-5-8 |
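The Logging 5.8 notes above mention that the collector can receive logs over HTTP as a webhook (LOG-4562) and that each receiver input gets its own service named <CLF.Name>-<input.Name> (LOG-5009). The sketch below is not taken from the documentation; it only illustrates pushing one JSON record at such an endpoint from inside the cluster. The service name, port, path, payload shape, and the omission of TLS trust and authentication setup are all assumptions — take the real values from your ClusterLogForwarder configuration and the receiver service the operator creates.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpReceiverSmokeTest {

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint following the <CLF.Name>-<input.Name> service naming
        // convention; scheme, host, and port must match your receiver configuration,
        // and a real call typically needs the cluster CA bundle and a bearer token.
        String receiverUrl = "http://collector-httpreceiver.openshift-logging.svc:8080";

        // Minimal JSON record; the accepted payload format depends on how the
        // receiver input is configured.
        String record = "{\"message\":\"receiver smoke test\",\"level\":\"info\"}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(receiverUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(record))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Receiver returned HTTP " + response.statusCode());
    }
}
```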
Part V. Servers | Part V. Servers This part discusses various topics related to servers such as how to set up a web server or share files and directories over a network. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/part-servers |
Chapter 6. Filtering in the API | Chapter 6. Filtering in the API The system recognizes a collection as a "queryset". You can filter this by using various operators. Procedure To find groups that contain the name "foo", use the following: http://<controller server name>/api/v2/groups/?name__contains=foo To find an exact match, use the following: http://<controller server name>/api/v2/groups/?name=foo If a resource is of an integer type, you must add \_\_int to the end to cast your string input value to an integer, such as the following: http://<controller server name>/api/v2/arbitrary_resource/?x__int=5 You can query related resources with the following: http://<controller server name>/api/v2/users/?first_name__icontains=kim This returns all users with names that include the string "Kim" in them. You can also filter against many fields at once: http://<controller server name>/api/v2/groups/?name__icontains=test&has_active_failures=false This finds all groups containing the name "test" that have no active failures. Additional resources For more information about what types of operators are available, see QuerySet API reference . Note You can also watch the API as the UI is being used to see how it is filtering on various criteria. 6.1. Advanced queries in the API You can use additional query string parameters used to filter the list of results returned to those matching a given value. You can only use fields and relations that exist in the database for filtering. Ensure that any special characters in the specified value are URL-encoded. For example: ?field=value%20xyz Fields can also span relations, only for fields and relationships defined in the database: ?other__field=value To exclude results matching certain criteria, prefix the field parameter with not__ : ?not__field=value By default, all query string filters are AND'ed together, so only the results matching all filters are returned. To combine results matching any one of multiple criteria, prefix each query string parameter with or__ : ?or__field=value&or__field=othervalue ?or__not__field=value&or__field=othervalue The default AND filtering applies all filters simultaneously to each related object being filtered across database relationships. The chain filter instead applies filters separately for each related object. To use this, prefix the query string parameter with chain__ : ?chain__related__field=value&chain__related__field2=othervalue ?chain__not__related__field=value&chain__related__field2=othervalue If you write the first query as ?related field=value&related field2=othervalue , it returns only the primary objects where the same related object satisfied both conditions. As written by using the chain filter, it would return the intersection of primary objects matching each condition. 6.2. Field lookups You can use field lookups for more advanced queries, by appending the lookup to the field name: ?field__lookup=value The following field lookups are supported: exact: Exact match (default lookup if not specified, see the following note for more information). iexact: Case-insensitive version of exact. contains: Field contains value. icontains: Case-insensitive version of contains. startswith: Field starts with value. istartswith: Case-insensitive version of startswith. endswith: Field ends with value. iendswith: Case-insensitive version of endswith. regex: Field matches the given regular expression. iregex: Case-insensitive version of regular expression. gt: Greater than comparison. gte: Greater than or equal to comparison. 
lt: Less than comparison. lte: Less than or equal to comparison. isnull: Check whether the given field or related object is null; expects a boolean value. in: Check whether the given field's value is present in the list provided; expects a list of items. You can specify boolean values as True or 1 for true, False or 0 for false (both case-insensitive). For example, ?created__gte=2023-01-01 provides a list of items created on or after 2023-01-01. You can specify null values as None or Null (both case-insensitive), though we recommend using the isnull lookup to explicitly check for null values. You can specify lists (for the in lookup) as a comma-separated list of values. You can filter based on the requesting user's level of access by using a query string parameter: role_level : Level of role to filter on, such as admin_role . A client-side sketch that combines several of these lookups follows this entry. Note Earlier releases of Ansible Automation Platform returned queries with _exact results by default. As a workaround, set the limit to ?limit_exact for the default filter. For example, /api/v2/jobs/?limit_exact=example.domain.com results in: { "count": 1, "next": null, "previous": null, "results": [ ... | [
"http://<controller server name>/api/v2/groups/?name__contains=foo",
"http://<controller server name>/api/v2/groups/?name=foo",
"http://<controller server name>/api/v2/arbitrary_resource/?x__int=5",
"http://<controller server name>/api/v2/users/?first_name__icontains=kim",
"http://<controller server name>/api/v2/groups/?name__icontains=test&has_active_failures=false",
"?field=value%20xyz",
"?other__field=value",
"?not__field=value",
"?or__field=value&or__field=othervalue ?or__not__field=value&or__field=othervalue",
"?chain__related__field=value&chain__related__field2=othervalue ?chain__not__related__field=value&chain__related__field2=othervalue",
"?field__lookup=value",
"{ \"count\": 1, \"next\": null, \"previous\": null, \"results\": ["
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/controller-api-filter |
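The client-side sketch referenced in the field-lookup section follows. It is not part of the product documentation: the controller hostname, the use of an OAuth2 bearer token read from an environment variable, and the particular filter combination are illustrative assumptions; only the query-string operators themselves come from the chapter above.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ControllerQueryExample {

    public static void main(String[] args) throws Exception {
        // Placeholder host and credentials; substitute your controller URL and token.
        String controller = "https://controller.example.com";
        String token = System.getenv("CONTROLLER_TOKEN");

        // Build a filtered query: groups whose name contains "test", that have no
        // active failures, excluding names ending in "-old". URL-encoding the values
        // guards against special characters, as the chapter advises.
        String query = "name__icontains=" + URLEncoder.encode("test", StandardCharsets.UTF_8)
                + "&has_active_failures=false"
                + "&not__name__iendswith=" + URLEncoder.encode("-old", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(
                    URI.create(controller + "/api/v2/groups/?" + query))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body()); // JSON document with count, next, previous, results
    }
}
```

The not__ prefix shows how exclusion composes with field lookups; the same pattern extends to or__ and chain__ prefixes described in the advanced queries section.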
36.2. Preparing to Upgrade | 36.2. Preparing to Upgrade Before upgrading the kernel, take a few precautionary steps. The first step is to make sure working boot media exists for the system in case a problem occurs. If the boot loader is not configured properly to boot the new kernel, the system cannot be booted into Red Hat Enterprise Linux without working boot media. For example, to create a boot diskette, login as root, and type the following command at a shell prompt: Note Refer to the mkbootdisk man page for more options. Creating bootable media via CD-Rs, CD-RWs, and USB flash drives are also supported given the system BIOS also supports it. Reboot the machine with the boot media and verify that it works before continuing. Hopefully, the media is not needed, but store it in a safe place just in case. To determine which kernel packages are installed, execute the following command at a shell prompt: The output contains some or all of the following packages, depending on the system's architecture (the version numbers and packages may differ): From the output, determine which packages need to be download for the kernel upgrade. For a single processor system, the only required package is the kernel package. Refer to Section 36.1, "Overview of Kernel Packages" for descriptions of the different packages. In the file name, each kernel package contains the architecture for which the package was built. The format is kernel- <variant> - <version> . <arch> .rpm, where <variant> is smp , utils , or so forth. The <arch> is one of the following: x86_64 for the AMD64 architecture ia64 for the Intel (R) Itanium TM architecture ppc64 for the IBM (R) eServer TM pSeries TM architecture ppc64 for the IBM (R) eServer TM iSeries TM architecture s390 for the IBM (R) S/390 (R) architecture s390x for the IBM (R) eServer TM zSeries (R) architecture x86 variant: The x86 kernels are optimized for different x86 versions. The options are as follows: i686 for Intel (R) Pentium (R) II, Intel (R) Pentium (R) III, Intel (R) Pentium (R) 4, AMD Athlon (R), and AMD Duron (R) systems | [
"/sbin/mkbootdisk `uname -r`",
"-qa | grep kernel",
"kernel-2.6.9-5.EL kernel-devel-2.6.9-5.EL kernel-utils-2.6.9-5.EL kernel-doc-2.6.9-5.EL kernel-smp-2.6.9-5.EL kernel-smp-devel-2.6.9-5.EL kernel-hugemem-devel-2.6.9-5.EL"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/manually_upgrading_the_kernel-preparing_to_upgrade |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/installing_and_using_red_hat_build_of_openjdk_17_on_rhel/making-open-source-more-inclusive |
probe::signal.flush | probe::signal.flush Name probe::signal.flush - Flushing all pending signals for a task Synopsis Values name Name of the probe point task The task handler of the process performing the flush pid_name The name of the process associated with the task performing the flush sig_pid The PID of the process associated with the task performing the flush | [
"signal.flush"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-flush |
34.2. Configuring Automount | 34.2. Configuring Automount in Identity Management, configuring automount entries like locations and maps requires an existing autofs/NFS server. Creating automount entries does not create the underlying autofs configuration. Autofs can be configured manually using LDAP or SSSD as a data store, or it can be configured automatically. Note Before changing the automount configuration, test that for at least one user, their /home directory can be mounted from the command line successfully. Making sure that NFS is working properly makes it easier to troubleshoot any potential IdM automount configuration errors later. 34.2.1. Configuring NFS Automatically After a system is configured as an IdM client, which includes IdM servers and replicas that are configured as domain clients as part of their configuration, autofs can be configured to use the IdM domain as its NFS domain and have autofs services enabled. By default, the ipa-client-automount utility automatically configures the NFS configuration files, /etc/sysconfig/nfs and /etc/idmapd.conf . It also configures SSSD to manage the credentials for NFS. If the ipa-client-automount command is run without any options, it runs a DNS discovery scan to identify an available IdM server and creates a default location called default . It is possible to specify an IdM server to use and to create an automount location other than default: Along with setting up NFS, the ipa-client-automount utility configures SSSD to cache automount maps, in case the external IdM store is ever inaccessible. Configuring SSSD does two things: It adds service configuration information to the SSSD configuration. The IdM domain entry is given settings for the autofs provider and the mount location. And NFS is added to the list of supported services ( services = nss,pam,autofs... ) and given a blank configuration entry ( [autofs] ). The Name Service Switch (NSS) service information is updated to check SSSD first for automount information, and then the local files. There may be some instances, such as highly secure environments, where it is not appropriate for a client to cache automount maps. In that case, the ipa-client-automount command can be run with the --no-sssd option, which changes all of the required NFS configuration files, but does not change the SSSD configuration. If --no-sssd is used, the list of configuration files updated by ipa-client-automount is different: The command updates /etc/sysconfig/autofs instead of /etc/sysconfig/nfs . The command configures /etc/autofs_ldap_auth.conf with the IdM LDAP configuration. The command configures /etc/nsswitch.conf to use the LDAP services for automount maps. Note The ipa-client-automount command can only be run once. If there is an error in the configuration, than the configuration files need to be edited manually. 34.2.2. Configuring autofs Manually to Use SSSD and Identity Management Edit the /etc/sysconfig/autofs file to specify the schema attributes that autofs searches for: Specify the LDAP configuration. There are two ways to do this. The simplest is to let the automount service discover the LDAP server and locations on its own: Alternatively, explicitly set which LDAP server to use and the base DN for LDAP searches: Note The default value for location is default . If additional locations are added ( Section 34.5, "Configuring Locations" ), then the client can be pointed to use those locations, instead. 
Edit the /etc/autofs_ldap_auth.conf file so that autofs allows client authentication with the IdM LDAP server. Change authrequired to yes. Set the principal to the Kerberos host principal for the NFS client server, host/fqdn@REALM . The principal name is used to connect to the IdM directory as part of GSS client authentication. <autofs_ldap_sasl_conf usetls="no" tlsrequired="no" authrequired="yes" authtype="GSSAPI" clientprinc="host/[email protected]" /> If necessary, run klist -k to get the exact host principal information. Configure autofs as one of the services which SSSD manages. Open the SSSD configuration file. Add the autofs service to the list of services handled by SSSD. Create a new [autofs] section. This can be left blank; the default settings for an autofs service work with most infrastructures. Optionally, set a search base for the autofs entries. By default, this is the LDAP search base, but a subtree can be specified in the ldap_autofs_search_base parameter. Restart SSSD: Check the /etc/nsswitch.conf file, so that SSSD is listed as a source for automount configuration: Restart autofs: Test the configuration by listing a user's /home directory: If this does not mount the remote file system, check the /var/log/messages file for errors. If necessary, increase the debug level in the /etc/sysconfig/autofs file by setting the LOGGING parameter to debug . Note If there are problems with automount, then cross-reference the automount attempts with the 389 Directory Server access logs for the IdM instance, which will show the attempted access, user, and search base. It is also simple to run automount in the foreground with debug logging on. This prints the debug log information directly, without having to cross-check the LDAP access log with automount's log. 34.2.3. Configuring Automount on Solaris Note Solaris uses a different schema for autofs configuration than the schema used by Identity Management. Identity Management uses the 2307bis-style automount schema which is defined for 389 Directory Server (and used in IdM's internal Directory Server instance). If the NFS server is running on Red Hat Enterprise Linux, specify on the Solaris machine that NFSv3 is the maximum supported version. Edit the /etc/default/nfs file and set the following parameter: Use the ldapclient command to configure the host to use LDAP: Enable automount : Test the configuration. Check the LDAP configuration: List a user's /home directory: | [
"ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/nsswitch.conf Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs",
"ipa-client-automount --server=ipaserver.example.com --location=boston",
"autofs_provider = ipa ipa_automount_location = default",
"automount: sss files",
"ipa-client-automount --no-sssd",
"# Other common LDAP naming # MAP_OBJECT_CLASS=\"automountMap\" ENTRY_OBJECT_CLASS=\"automount\" MAP_ATTRIBUTE=\"automountMapName\" ENTRY_ATTRIBUTE=\"automountKey\" VALUE_ATTRIBUTE=\"automountInformation\"",
"LDAP_URI=\"ldap:///dc=example,dc=com\"",
"LDAP_URI=\"ldap://ipa.example.com\" SEARCH_BASE=\"cn= location ,cn=automount,dc=example,dc=com\"",
"<autofs_ldap_sasl_conf usetls=\"no\" tlsrequired=\"no\" authrequired=\"yes\" authtype=\"GSSAPI\" clientprinc=\"host/[email protected]\" />",
"vim /etc/sssd/sssd.conf",
"[sssd] services = nss,pam, autofs",
"[nss] [pam] [sudo] [autofs] [ssh] [pac]",
"[domain/EXAMPLE] ldap_search_base = \"dc=example,dc=com\" ldap_autofs_search_base = \"ou=automount,dc=example,dc=com\"",
"systemctl restart sssd.service",
"automount: sss files",
"systemctl restart autofs.service",
"ls /home/ userName",
"automount -f -d",
"NFS_CLIENT_VERSMAX=3",
"ldapclient -v manual -a authenticationMethod=none -a defaultSearchBase=dc=example,dc=com -a defaultServerList=ipa.example.com -a serviceSearchDescriptor=passwd:cn=users,cn=accounts,dc=example,dc=com -a serviceSearchDescriptor=group:cn=groups,cn=compat,dc=example,dc=com -a serviceSearchDescriptor=auto_master:automountMapName=auto.master,cn= location ,cn=automount,dc=example,dc=com?one -a serviceSearchDescriptor=auto_home:automountMapName=auto_home,cn= location ,cn=automount,dc=example,dc=com?one -a objectClassMap=shadow:shadowAccount=posixAccount -a searchTimelimit=15 -a bindTimeLimit=5",
"svcadm enable svc:/system/filesystem/autofs",
"ldapclient -l auto_master dn: automountkey=/home,automountmapname=auto.master,cn= location ,cn=automount,dc=example,dc=com objectClass: automount objectClass: top automountKey: /home automountInformation: auto.home",
"ls /home/ userName"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-automount |
20.4. Fonts | 20.4. Fonts fonts-tweak-tool A new tool, fonts-tweak-tool , enables users to configure the default fonts per language. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-internationalization-fonts |