title | content | commands | url
---|---|---|---|
Chapter 17. Connecting to the installation system
|
Chapter 17. Connecting to the installation system After the Initial Program Load (IPL) of the Anaconda installation program is complete, connect to the IBM Z system from a local machine as the 'install' user over an ssh connection. You need to connect to the installation system to continue the installation process. Use VNC mode to run a GUI-based installation, or use the established connection to run a text mode installation. Additional resources: For more information about installing VNC and the various VNC modes for GUI-based installation, see Chapter 25, Using VNC. 17.1. Setting up a remote connection using VNC From a local machine, run the steps below to set up a remote connection with the IBM Z system. Prerequisites: The initial program boot is complete on the IBM Z system, and the command prompt displays: If you want to restrict VNC access to the installation system, ensure that the inst.vncpassword=PASSWORD boot parameter is configured. On the command prompt, run the following command: OR Depending on whether or not you have configured the inst.vnc parameter, the ssh session displays the following output: When the inst.vnc parameter is configured: When the inst.vnc parameter is not configured: If you have configured the inst.vnc parameter, proceed to step 5. Enter 1 to start VNC. Enter a password if you have not set the inst.vncpassword= boot option but want to secure the server connection. From a new command prompt, connect to the VNC server. If you have secured the connection, use the password that you entered in the previous step or the one that you set for the inst.vncpassword= boot option. The RHEL installer is launched in the VNC client.
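As an illustration only, the end-to-end flow from the local machine might look like the following shell sketch; the host name, display number, and prompt responses are placeholders rather than values taken from a real system:
ssh install@my-z-system-IP-address        # connect as the 'install' user once the IPL has completed
# If inst.vnc is not set, enter 1 at the prompt to start VNC and type a temporary VNC password.
vncviewer my-z-system-IP-address:1        # :1 is the display number reported by the installer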
|
[
"Starting installer, one moment Please ssh install@ my-z-system ( system ip address ) to begin the install.",
"ssh install@ my-z-system-domain-name",
"ssh install@ my-z-system-IP-address",
"Starting installer, one moment Please manually connect your vnc client to my-z-system:1 (system-ip-address:1) to begin the install.",
"Starting installer, one moment Graphical installation is not available. Starting text mode. ============= Text mode provides a limited set of installation options. It does not offer custom partitioning for full control over the disk layout. Would you like to use VNC mode instead? 1) Start VNC 2) Use text mode Please make your choice from above ['q' to quit | 'c' to continue | 'r' to refresh]:",
"vncviewer my-z-system-ip-address:display_number"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-connecting-installation-system-s390
|
Chapter 7. Managing user accounts using Ansible playbooks
|
Chapter 7. Managing user accounts using Ansible playbooks You can manage users in IdM using Ansible playbooks. After presenting the user life cycle , this chapter describes how to use Ansible playbooks for the following operations: Ensuring the presence of a single user listed directly in the YML file. Ensuring the presence of multiple users listed directly in the YML file. Ensuring the presence of multiple users listed in a JSON file that is referenced from the YML file. Ensuring the absence of users listed directly in the YML file. 7.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you granted admin permissions to at least one different user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 7.2. Ensuring the presence of an IdM user using an Ansible playbook The following procedure describes ensuring the presence of a user in IdM using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the data of the user whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/add-user.yml file. 
For example, to create user named idm_user and add Password123 as the user password: --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: "{{ ipaadmin_password }}" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: "+555123457" email: [email protected] passwordexpiration: "2023-01-19 23:59:59" password: "Password123" update_password: on_create You must use the following options to add a user: name : the login name first : the first name string last : the last name string For the full list of available user options, see the /usr/share/doc/ansible-freeipa/README-user.md Markdown file. Note If you use the update_password: on_create option, Ansible only creates the user password when it creates the user. If the user is already created with a password, Ansible does not generate a new password. Run the playbook: Verification You can verify if the new user account exists in IdM by using the ipa user-show command: Log into ipaserver as admin: Request a Kerberos ticket for admin: Request information about idm_user : The user named idm_user is present in IdM. 7.3. Ensuring the presence of multiple IdM users using Ansible playbooks The following procedure describes ensuring the presence of multiple users in IdM using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the data of the users whose presence you want to ensure in IdM. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-users-present.yml file. For example, to create users idm_user_1 , idm_user_2 , and idm_user_3 , and add Password123 as the password of idm_user_1 : --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: "{{ ipaadmin_password }}" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: "+555123457" email: [email protected] passwordexpiration: "2023-01-19 23:59:59" password: "Password123" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011 Note If you do not specify the update_password: on_create option, Ansible re-sets the user password every time the playbook is run: if the user has changed the password since the last time the playbook was run, Ansible re-sets password. Run the playbook: Verification You can verify if the user account exists in IdM by using the ipa user-show command: Log into ipaserver as administrator: Display information about idm_user_1 : The user named idm_user_1 is present in IdM. 7.4. 
Ensuring the presence of multiple IdM users from a JSON file using Ansible playbooks The following procedure describes how you can ensure the presence of multiple users in IdM using an Ansible playbook. The users are stored in a JSON file. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary tasks. Reference the JSON file with the data of the users whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/README-user.md file: Create the users.json file, and add the IdM users into it. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/README-user.md file. For example, to create users idm_user_1 , idm_user_2 , and idm_user_3 , and add Password123 as the password of idm_user_1 : { "users": [ { "name": "idm_user_1", "first": "First 1", "last": "Last 1", "password": "Password123" }, { "name": "idm_user_2", "first": "First 2", "last": "Last 2" }, { "name": "idm_user_3", "first": "First 3", "last": "Last 3" } ] } Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification You can verify if the user accounts are present in IdM using the ipa user-show command: Log into ipaserver as administrator: Display information about idm_user_1 : The user named idm_user_1 is present in IdM. 7.5. Ensuring the absence of users using Ansible playbooks The following procedure describes how you can use an Ansible playbook to ensure that specific users are absent from IdM. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the users whose absence from IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-users-present.yml file. For example, to delete users idm_user_1 , idm_user_2 , and idm_user_3 : --- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: "{{ ipaadmin_password }}" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification You can verify that the user accounts do not exist in IdM by using the ipa user-show command: Log into ipaserver as administrator: Request information about idm_user_1 : The user named idm_user_1 does not exist in IdM. 7.6. Additional resources See the README-user.md Markdown file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/user directory.
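The following shell sketch ties the playbook run and the verification steps of this chapter together; the inventory path, vault password file, and playbook name are placeholders and assume the ~/MyPlaybooks layout described above:
ansible-playbook --vault-password-file=password_file -v \
  -i ~/MyPlaybooks/inventory.file ~/MyPlaybooks/add-IdM-user.yml   # ensure the user is present
ssh admin@server.idm.example.com                                   # log in to the IdM server
kinit admin                                                        # obtain a Kerberos ticket as admin
ipa user-show idm_user                                             # confirm the user exists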
|
[
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" update_password: on_create",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-IdM-user.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa user-show idm_user User login: idm_user First name: Alice Last name: Acme .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure users' presence hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Include users_present.json include_vars: file: users_present.json - name: Users present ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: \"{{ users }}\"",
"{ \"users\": [ { \"name\": \"idm_user_1\", \"first\": \"First 1\", \"last\": \"Last 1\", \"password\": \"Password123\" }, { \"name\": \"idm_user_2\", \"first\": \"First 2\", \"last\": \"Last 2\" }, { \"name\": \"idm_user_3\", \"first\": \"First 3\", \"last\": \"Last 3\" } ] }",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /ensure-users-present-jsonfile.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /delete-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 ipa: ERROR: idm_user_1: user not found"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/managing-user-accounts-using-ansible-playbooks_using-ansible-to-install-and-manage-identity-management
|
Chapter 3. Configuring certificates
|
Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Verify that all certificates which include -----END CERTIFICATE----- also end with one carriage return after that line. Important Updating the certificate authority (CA) causes the nodes in your cluster to reboot. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the step. Important To trigger the Ingress Operator to perform a rolling update, you must update the name of the secret. 
Because the kubelet automatically propagates changes to the secret in the volume mount, updating the secret contents does not trigger a rolling update. For more information, see this Red Hat Knowledgebase Solution . Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Login to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. Do not include the port number. 2 Replace <secret> with the name used for the secret in the step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True . 
USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.12.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Note A new revision of the Kubernetes API server only rolls out if the API server named certificate is added for the first time. When the API server named certificate is renewed, a new revision of the Kubernetes API server does not roll out because the kube-apiserver pods dynamically reload the updated certificate. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and is only valid for internal communications. If your service is a headless service (no clusterIP value set), the generated certificate also contains a wildcard subject in the format of *.<service.name>.<service.namespace>.svc . Important Because the generated certificates contain wildcard subjects for headless services, you must not use the service CA if your client must differentiate between individual pods. In this case: Generate individual TLS certificates by using a different CA. Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates. Prerequisites You must have a service defined. 
Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. 
For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 
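If you prefer to pull out only the injected bundle instead of reading the full YAML, a sketch such as the following can be used; the resource names are placeholders and the jsonpath expressions are illustrative rather than taken from the product documentation:
oc get configmap <config_map_name> -o jsonpath='{.data.service-ca\.crt}'                         # injected PEM bundle
oc get apiservice <api_service_name> -o jsonpath='{.spec.caBundle}' | base64 --decode            # bundle is base64-encoded in the object
oc get mutatingwebhookconfigurations <mutating_webhook_name> -o jsonpath='{.webhooks[*].clientConfig.caBundle}'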
3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA. 
This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle Important Updating the certificate authority (CA) will cause the nodes of your cluster to reboot. 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization
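To confirm which certificate the router serves after replacing the default ingress certificate, a quick check such as the following can help; <secret> is the secret created in the ingress procedure, and the jsonpath expression is an assumption rather than part of the documented steps:
oc get secret <secret> -n openshift-ingress -o jsonpath='{.data.tls\.crt}' \
  | base64 --decode | openssl x509 -noout -subject -issuer -enddate   # inspect the first certificate in the chain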
|
[
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress",
"oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator",
"oc login -u kubeadmin -p <password> https://FQDN:6443",
"oc config view --flatten > kubeconfig-newapi",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config",
"oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2",
"oc get apiserver cluster -o yaml",
"spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.12.0 True False False 145m",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2",
"oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1",
"oc describe service <service_name>",
"Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837",
"oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true",
"oc get configmap <config_map_name> -o yaml",
"apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----",
"oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true",
"oc get apiservice <api_service_name> -o yaml",
"apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>",
"oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true",
"oc get crd <crd_name> -o yaml",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc describe service <service_name>",
"service.beta.openshift.io/serving-cert-secret-name: <secret>",
"oc delete secret <secret> 1",
"oc get secret <service_name>",
"NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s",
"oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate",
"oc delete secret/signing-key -n openshift-service-ca",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_and_compliance/configuring-certificates
|
Chapter 1. Red Hat Quay tenancy model
|
Chapter 1. Red Hat Quay tenancy model Before creating repositories to contain your container images in Red Hat Quay, you should consider how these repositories will be structured. With Red Hat Quay, each repository requires a connection with either an Organization or a User . This affiliation defines ownership and access control for the repositories. 1.1. Tenancy model Organizations provide a way of sharing repositories under a common namespace that does not belong to a single user. Instead, these repositories belong to several users in a shared setting, such as a company. Teams provide a way for an Organization to delegate permissions. Permissions can be set at the global level (for example, across all repositories) or on specific repositories. They can also be set for specific sets, or groups, of users. Users can log in to a registry through the web UI or by using a client such as Podman and its respective login command, for example, $ podman login . Each user automatically gets a user namespace, for example, <quay-server.example.com>/<user>/<username> , or quay.io/<username> if you are using Quay.io. Superusers have enhanced access and privileges through the Super User Admin Panel in the user interface. Superuser API calls are also available, which are not visible or accessible to normal users. Robot accounts provide automated access to repositories for non-human users, such as pipeline tools. Robot accounts are similar to OpenShift Container Platform Service Accounts . Permissions can be granted to a robot account in a repository by adding that account as you would another user or team.
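For example, the namespace distinction described above might look like the following with Podman; the registry host, organization, and image names are placeholders:
podman login quay-server.example.com                                                   # authenticate as your user
podman push myimage:latest quay-server.example.com/<username>/myimage:latest           # push into your user namespace
podman push myimage:latest quay-server.example.com/my-organization/myimage:latest      # push into an organization namespace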
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/user-org-intro_use-quay
|
Chapter 5. Changes in microservices patterns
|
Chapter 5. Changes in microservices patterns This section explains the changes in microservices patterns. 5.1. Changes in Eclipse Vert.x circuit breaker The following section describes the changes in the Eclipse Vert.x circuit breaker. 5.1.1. Removed execute command methods in circuit breaker The following methods have been removed from the CircuitBreaker class because they cannot be used with futures. Removed methods Replacing methods CircuitBreaker.executeCommand() CircuitBreaker.execute() CircuitBreaker.executeCommandWithFallback() CircuitBreaker.executeWithFallback() 5.2. Changes in Eclipse Vert.x service discovery The following section describes the changes in Eclipse Vert.x service discovery. 5.2.1. Removed create methods from service discovery that contain ServiceDiscovery argument The following create methods in service discovery, which take a Handler<ServiceDiscovery> completion handler as an argument, have been removed. These methods cannot be used with futures. Removed methods Replacing methods ServiceDiscovery.create(... , Handler<ServiceDiscovery> completionHandler) ServiceDiscovery.create(Vertx) ServiceDiscovery.create(... , Handler<ServiceDiscovery> completionHandler) ServiceDiscovery.create(Vertx, ServiceDiscoveryOptions) 5.2.2. Service importer and exporter methods are no longer fluent The ServiceDiscovery.registerServiceImporter() and ServiceDiscovery.registerServiceExporter() methods are no longer fluent. The methods now return Future<Void> . 5.2.3. Kubernetes service importer is no longer registered automatically The vertx-service-discovery-bridge-kubernetes dependency adds the KubernetesServiceImporter discovery bridge. The bridge imports services from Kubernetes or OpenShift into the Eclipse Vert.x service discovery. From Eclipse Vert.x 4, this bridge is no longer registered automatically. Even if you have added the bridge to the classpath of your Maven project, it is not registered automatically. You must manually register the bridge after creating the ServiceDiscovery instance. The following example shows you how to manually register the bridge. JsonObject defaultConf = new JsonObject(); serviceDiscovery.registerServiceImporter(new KubernetesServiceImporter(), defaultConf);
|
[
"JsonObject defaultConf = new JsonObject(); serviceDiscovery.registerServiceImporter(new KubernetesServiceImporter(), defaultConf);"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/changes-in-microservices-patterns_vertx
|
Chapter 10. QEMU-img and QEMU Guest Agent
|
Chapter 10. QEMU-img and QEMU Guest Agent This chapter contains useful hints and tips for using the qemu-img package with guest virtual machines. If you are looking for information on QEMU trace events and arguments, refer to the README file located here: /usr/share/doc/qemu-*/README.systemtap . 10.1. Using qemu-img The qemu-img command line tool is used for formatting, modifying, and verifying various file systems used by KVM. qemu-img options and usages are listed below. Check Perform a consistency check on the disk image filename . Note Only the qcow2 and vdi formats support consistency checks. Using the -r option tries to repair any inconsistencies that are found during the check: with -r leaks , only cluster leaks are repaired, and with -r all , all kinds of errors are fixed. Note that this has a risk of choosing the wrong fix or hiding corruption issues that may have already occurred. Commit Commits any changes recorded in the specified file ( filename ) to the file's base image with the qemu-img commit command. Optionally, specify the file's format type ( format ). Convert The convert option is used to convert one recognized image format to another image format. Command format: The -p parameter shows the progress of the command (optional and not available for every command) and the -S option allows for the creation of a sparse file , which is included within the disk image. For all practical purposes, a sparse file functions like a standard file, except that physical blocks containing only zeros are not actually allocated. When the operating system sees this file, it treats it as if it exists and takes up actual disk space, even though in reality it does not take any. This is particularly helpful when creating a disk for a guest virtual machine, as this gives the appearance that the disk has taken much more disk space than it has. For example, if you set -S to 50Gb on a disk image that is 10Gb, then your 10Gb of disk space will appear to be 60Gb in size even though only 10Gb is actually being used. Convert the disk image filename to disk image output_filename using format output_format . The disk image can be optionally compressed with the -c option, or encrypted with the -o option by setting -o encryption . Note that the options available with the -o parameter differ with the selected format. Only the qcow2 format supports encryption or compression. qcow2 encryption uses the AES format with secure 128-bit keys. qcow2 compression is read-only, so if a compressed sector is converted from qcow2 format, it is written to the new format as uncompressed data. Image conversion is also useful to get a smaller image when using a format which can grow, such as qcow or cow . The empty sectors are detected and suppressed from the destination image. Create Create the new disk image filename of size size and format format . If a base image is specified with -o backing_file= filename , the image will only record differences between itself and the base image. The backing file will not be modified unless you use the commit command. No size needs to be specified in this case. Preallocation is an option that may only be used when creating qcow2 images. Accepted values include -o preallocation=off|metadata|full|falloc . Images with preallocated metadata are larger than images without. However, in cases where the image size increases, performance will improve as the image grows. It should be noted that using full allocation can take a long time with large images.
In cases where you want full allocation and time is of the essence, using falloc will save you time. Info The info parameter displays information about a disk image filename . The format for the info option is as follows: This command is often used to discover the size reserved on disk, which can be different from the displayed size. If snapshots are stored in the disk image, they are also displayed. This command will show, for example, how much space is being taken by a qcow2 image on a block device. This is done by running qemu-img info . You can check that the image in use is the one that matches the output of the qemu-img info command with the qemu-img check command. Refer to Section 10.1, "Using qemu-img" . Map The # qemu-img map [-f format ] [--output= output_format ] filename command dumps the metadata of the image filename and its backing file chain. Specifically, this command dumps the allocation state of every sector of a specified file, together with the topmost file that allocates it in the backing file chain. For example, if you have a chain such as c.qcow2 b.qcow2 a.qcow2, then a.qcow2 is the original file, b.qcow2 contains the changes made to a.qcow2, and c.qcow2 is the delta file from b.qcow2. When this chain is created, the image files store the normal image data, plus information about what is in which file and where it is located within the file. This information is referred to as the image's metadata. The -f format option is the format of the specified image file. Formats such as raw, qcow2, vhdx and vmdk may be used. There are two output options possible: human and json . human is the default setting. It is designed to be more readable to the human eye, and as such, this format should not be parsed. For clarity and simplicity, the default human format only dumps known-nonzero areas of the file. Known-zero parts of the file are omitted altogether, and likewise for parts that are not allocated throughout the chain. When the command is executed, qemu-img output will identify a file from which the data can be read, and the offset in the file. The output is displayed as a table with four columns, the first three of which are hexadecimal numbers. json , or JSON (JavaScript Object Notation), is readable by humans, but as a structured data format it is also designed to be parsed. For example, if you want to parse the output of "qemu-img map" in a parser, you should use the option --output=json . For more information on the JSON format, refer to the qemu-img(1) man page. Rebase Changes the backing file of an image. The backing file is changed to backing_file and (if the format of filename supports the feature) the backing file format is changed to backing_format . Note Only the qcow2 format supports changing the backing file (rebase). There are two different modes in which rebase can operate: Safe and Unsafe . Safe mode is used by default and performs a real rebase operation. The new backing file may differ from the old one and the qemu-img rebase command will take care of keeping the guest virtual machine-visible content of filename unchanged. In order to achieve this, any clusters that differ between backing_file and the old backing file of filename are merged into filename before any changes are made to the backing file. Note that safe mode is an expensive operation, comparable to converting an image. The old backing file is required for it to complete successfully. Unsafe mode is used if the -u option is passed to qemu-img rebase .
In this mode, only the backing file name and format of filename are changed, without any checks taking place on the file contents. Make sure the new backing file is specified correctly or the guest-visible content of the image will be corrupted. This mode is useful for renaming or moving the backing file. It can be used without an accessible old backing file. For instance, it can be used to fix an image whose backing file has already been moved or renamed. Resize Change the disk image filename as if it had been created with size size . Only images in raw format can be resized regardless of version. Red Hat Enterprise Linux 6.1 and later adds the ability to grow (but not shrink) images in qcow2 format. Use the following to set the size of the disk image filename to size bytes: You can also resize relative to the current size of the disk image. To give a size relative to the current size, prefix the number of bytes with + to grow, or - to reduce the size of the disk image by that number of bytes. Adding a unit suffix allows you to set the image size in kilobytes (K), megabytes (M), gigabytes (G) or terabytes (T). Warning Before using this command to shrink a disk image, you must use file system and partitioning tools inside the VM itself to reduce allocated file systems and partition sizes accordingly. Failure to do so will result in data loss. After using this command to grow a disk image, you must use file system and partitioning tools inside the VM to actually begin using the new space on the device. Snapshot List, apply, create, or delete an existing snapshot ( snapshot ) of an image ( filename ). -l lists all snapshots associated with the specified disk image. The apply option, -a , reverts the disk image ( filename ) to the state of a previously saved snapshot . -c creates a snapshot ( snapshot ) of an image ( filename ). -d deletes the specified snapshot. Supported Formats qemu-img is designed to convert files to one of the following formats: raw Raw disk image format (default). This can be the fastest file-based format. If your file system supports holes (for example in ext2 or ext3 on Linux or NTFS on Windows), then only the written sectors will reserve space. Use qemu-img info to obtain the real size used by the image, or ls -ls on Unix/Linux. Although Raw images give optimal performance, only very basic features are available with a Raw image (for example, no snapshots are available). qcow2 QEMU image format, the most versatile format with the best feature set. Use it to have optional AES encryption, zlib-based compression, support of multiple VM snapshots, and smaller images, which are useful on file systems that do not support holes (non-NTFS file systems on Windows). Note that this expansive feature set comes at the cost of performance. Although only the formats above can be used to run on a guest virtual machine or host physical machine, qemu-img also recognizes and supports the following formats in order to convert from them into either raw or qcow2 format. The format of an image is usually detected automatically. In addition to converting these formats into raw or qcow2 , they can be converted back from raw or qcow2 to the original format. bochs Bochs disk image format. cloop Linux Compressed Loop image, useful only to reuse directly compressed CD-ROM images present, for example, in the Knoppix CD-ROMs. cow User Mode Linux Copy On Write image format. The cow format is included only for compatibility with previous versions. It does not work with Windows. dmg Mac disk image format.
nbd Network block device. parallels Parallels virtualization disk image format. qcow Old QEMU image format. Only included for compatibility with older versions. vdi Oracle VM VirtualBox hard disk image format. vmdk VMware compatible image format (read-write support for versions 1 and 2, and read-only support for version 3). vpc Windows Virtual PC disk image format. Also referred to as vhd , or Microsoft virtual hard disk image format. vvfat Virtual VFAT disk image format.
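For example, a minimal conversion round trip might look like the following; the vm-disk.vmdk and vm-disk.qcow2 file names are hypothetical, and the source image must not be in use by a running guest while it is converted:
# Convert a VMware image to qcow2 and inspect the result.
qemu-img convert -f vmdk -O qcow2 vm-disk.vmdk vm-disk.qcow2
qemu-img info vm-disk.qcow2
# Convert back to the original vmdk format if required.
qemu-img convert -f qcow2 -O vmdk vm-disk.qcow2 vm-disk-restored.vmdk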
|
[
"qemu-img check -f qcow2 --output=qcow2 -r all filename-img.qcow2",
"qemu-img commit [-f format ] [-t cache ] filename",
"qemu-img convert [-c] [-p] [-f format ] [-t cache ] [-O output_format ] [-o options ] [-S sparse_size ] filename output_filename",
"qemu-img create [-f format ] [-o options ] filename [ size ][ preallocation ]",
"qemu-img info [-f format ] filename",
"qemu-img info /dev/vg-90.100-sluo/lv-90-100-sluo image: /dev/vg-90.100-sluo/lv-90-100-sluo file format: qcow2 virtual size: 20G (21474836480 bytes) disk size: 0 cluster_size: 65536",
"qemu-img map -f qcow2 --output=human /tmp/test.qcow2 Offset Length Mapped to File 0 0x20000 0x50000 /tmp/test.qcow2 0x100000 0x80000 0x70000 /tmp/test.qcow2 0x200000 0x1f0000 0xf0000 /tmp/test.qcow2 0x3c00000 0x20000 0x2e0000 /tmp/test.qcow2 0x3fd0000 0x10000 0x300000 /tmp/test.qcow2",
"qemu-img map -f qcow2 --output=json /tmp/test.qcow2 [{ \"start\": 0, \"length\": 131072, \"depth\": 0, \"zero\": false, \"data\": true, \"offset\": 327680}, { \"start\": 131072, \"length\": 917504, \"depth\": 0, \"zero\": true, \"data\": false},",
"qemu-img rebase [-f format ] [-t cache ] [-p] [-u] -b backing_file [-F backing_format ] filename",
"qemu-img resize filename size",
"qemu-img resize filename [+|-] size [K|M|G|T]",
"qemu-img snapshot [ -l | -a snapshot | -c snapshot | -d snapshot ] filename"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-Virtualization_Administration_Guide-Tips_and_tricks
|
Chapter 1. Device Mapper Multipathing
|
Chapter 1. Device Mapper Multipathing Device mapper multipathing (DM Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths. This chapter provides a summary of the features of DM-Multipath that were added subsequent to the initial release of Red Hat Enterprise Linux 7. Following that, this chapter provides a high level overview of DM Multipath and its components, as well as an overview of DM-Multipath setup. 1.1. New and Changed Features This section lists features of the DM Multipath that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. Table 5.1, "Useful multipath Command Options" . now includes entries for the -w and -W options of the multipath command, which allow you to better manage the wwids file. Additional options for the values argument of the features parameter in the multipath.conf file are documented in Chapter 4, The DM Multipath Configuration File . Table 4.1, "Multipath Configuration Defaults" . includes an entry for the force_sync parameter, which prevents path checkers from running in async mode when set to "yes". In addition, small technical corrections and clarifications have been made throughout the document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. This document includes a new section, Section 5.1, "Automatic Configuration File Generation with Multipath Helper" . The Multipath Helper application gives you options to create multipath configurations with custom aliases, device blacklists, and settings for the characteristics of individual multipath devices. The defaults section of the multipath.conf configuration file supports the new config_dir , new_bindings_in_boot , ignore_new_boot_devs , retrigger_tries , and retrigger_delays parameters. The defaults section of the multipath.conf file is documented in Table 4.1, "Multipath Configuration Defaults" . The defaults , devices , and multipaths sections of the multipath.conf configuration file now support the delay_watch_checks and delay_wait_checks configuration parameters. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. The multipathd command supports new format commands that show the status of multipath devices and paths in "raw" format versions. In raw format, no headers are printed and the fields are not padded to align the columns with the headers. Instead, the fields print exactly as specified in the format string. For information on the multipathd commands, see Section 5.11, "The multipathd Commands" . 
As of Red Hat Enterprise Linux 7.3, if you specify prio "alua exclusive_pref_bit" in your device configuration, multipath will create a path group that contains only the path with the pref bit set and will give that path group the highest priority. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File . The defaults , devices , and multipaths sections of the multipath.conf configuration file now support the skip_kpartx configuration parameter. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. The defaults , devices , and multipaths sections of the multipath.conf configuration file support the max_sectors_kb configuration parameter. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File . The defaults and devices sections of the multipath.conf configuration file support the detect_path_checker configuration parameter. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File . The defaults section of the multipath.conf configuration file supports the remove_retries and detect_path_checker parameters. The defaults section of the multipath.conf file is documented in Table 4.1, "Multipath Configuration Defaults" . 1.1.5. New and Changed Features for Red Hat Enterprise Linux 7.5 Red Hat Enterprise Linux 7.5 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.5, the blacklist and blacklist_exceptions section of the multipath.conf configuration file support the property parameter. For information on the property parameter, see Section 4.2.6, "Blacklist Exceptions" . the defaults and multipaths sections of the multipath.conf file now support a value of file for the reservation_key parameter. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File . The defaults section of the multipath.conf configuration file supports the prkeys_file parameter. The defaults section of the multipath.conf file is documented in Table 4.1, "Multipath Configuration Defaults" . 1.1.6. New and Changed Features for Red Hat Enterprise Linux 7.6 Red Hat Enterprise Linux 7.6 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.6, you can specify the protocol for a device to be excluded from multipathing in the blacklist section of the configuration file with a protocol section. For information on blacklisting by device protocol, see Section 4.2.5, "Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later)" . The defaults and devices sections of the multipath.conf configuration file support the all_tg_pt parameter. For information on the configuration parameters, see Chapter 4, The DM Multipath Configuration File .
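As a minimal sketch only, a multipath.conf excerpt that exercises some of the parameters introduced above might look like the following; the values are illustrative and must be tuned for the actual storage hardware, and the FCP protocol string is an assumption for this example:
defaults {
    # Run path checkers in synchronous mode (RHEL 7.1 and later).
    force_sync yes
    # Cap the I/O size issued to multipath devices (RHEL 7.4 and later).
    max_sectors_kb 1024
}
blacklist {
    # Exclude devices that use the FCP transport from multipathing (RHEL 7.6 and later).
    protocol "scsi:fcp"
}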
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/MPIO_Overview
|
Chapter 1. Working with systemd unit files
|
Chapter 1. Working with systemd unit files The systemd unit files represent your system resources. As a system administrator, you can perform the following advanced tasks: Create custom unit files Modify existing unit files Work with instantiated units 1.1. Introduction to unit files A unit file contains configuration directives that describe the unit and define its behavior. Several systemctl commands work with unit files in the background. To make finer adjustments, you can edit or create unit files manually. You can find three main directories where unit files are stored on the system; of these, the /etc/systemd/system/ directory is reserved for unit files created or customized by the system administrator. Unit file names take the following form: Here, unit_name stands for the name of the unit and type_extension identifies the unit type. For example, you can find an sshd.service as well as an sshd.socket unit present on your system. Unit files can be supplemented with a directory for additional configuration files. For example, to add custom configuration options to sshd.service , create the sshd.service.d/custom.conf file and insert additional directives there. For more information on configuration directories, see Modifying existing unit files . The systemd system and service manager can also create the sshd.service.wants/ and sshd.service.requires/ directories. These directories contain symbolic links to unit files that are dependencies of the sshd service. systemd creates the symbolic links automatically either during installation according to [Install] unit file options or at runtime based on [Unit] options. You can also create these directories and symbolic links manually. For more details on [Install] and [Unit] options, see the tables below. Many unit file options can be set using the so-called unit specifiers - wildcard strings that are dynamically replaced with unit parameters when the unit file is loaded. This enables creation of generic unit files that serve as templates for generating instantiated units. See Working with instantiated units . 1.2. Systemd unit files locations You can find the unit configuration files in one of the following directories: Table 1.1. systemd unit files locations Directory Description /usr/lib/systemd/system/ systemd unit files distributed with installed RPM packages. /run/systemd/system/ systemd unit files created at run time. This directory takes precedence over the directory with installed service unit files. /etc/systemd/system/ systemd unit files created by using the systemctl enable command as well as unit files added for extending a service. This directory takes precedence over the directory with runtime unit files. The default configuration of systemd is defined during compilation, and you can find the configuration in the /etc/systemd/system.conf file. By editing this file, you can modify the default configuration by overriding values for systemd units globally. For example, to override the default value of the timeout limit, which is set to 90 seconds, use the DefaultTimeoutStartSec parameter to input the required value in seconds. 1.3.
Unit file structure Unit files typically consist of three following sections: The [Unit] section Contains generic options that are not dependent on the type of the unit. These options provide unit description, specify the unit's behavior, and set dependencies to other units. For a list of most frequently used [Unit] options, see Important [Unit] section options . The [Unit type] section Contains type-specific directives, these are grouped under a section named after the unit type. For example, service unit files contain the [Service] section. The [Install] section Contains information about unit installation used by systemctl enable and disable commands. For a list of options for the [Install] section, see Important [Install] section options . Additional resources Important [Unit] section options Important [Service] section options Important [Install] section options 1.4. Important [Unit] section options The following tables lists important options of the [Unit] section. Table 1.2. Important [Unit] section options Option [a] Description Description A meaningful description of the unit. This text is displayed for example in the output of the systemctl status command. Documentation Provides a list of URIs referencing documentation for the unit. After [b] Defines the order in which units are started. The unit starts only after the units specified in After are active. Unlike Requires , After does not explicitly activate the specified units. The Before option has the opposite functionality to After . Requires Configures dependencies on other units. The units listed in Requires are activated together with the unit. If any of the required units fail to start, the unit is not activated. Wants Configures weaker dependencies than Requires . If any of the listed units does not start successfully, it has no impact on the unit activation. This is the recommended way to establish custom unit dependencies. Conflicts Configures negative dependencies, an opposite to Requires . [a] For a complete list of options configurable in the [Unit] section, see the systemd.unit(5) manual page. [b] In most cases, it is sufficient to set only the ordering dependencies with After and Before unit file options. If you also set a requirement dependency with Wants (recommended) or Requires , the ordering dependency still needs to be specified. That is because ordering and requirement dependencies work independently from each other. 1.5. Important [Service] section options The following tables lists important options of the [Service] section. Table 1.3. Important [Service] section options Option [a] Description Type Configures the unit process startup type that affects the functionality of ExecStart and related options. One of: * simple - The default value. The process started with ExecStart is the main process of the service. * forking - The process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete. * oneshot - This type is similar to simple , but the process exits before starting consequent units. * dbus - This type is similar to simple , but consequent units are started only after the main process gains a D-Bus name. * notify - This type is similar to simple , but consequent units are started only after a notification message is sent via the sd_notify() function. 
* idle - similar to simple , the actual execution of the service binary is delayed until all jobs are finished, which avoids mixing the status output with shell output of services. ExecStart Specifies commands or scripts to be executed when the unit is started. ExecStartPre and ExecStartPost specify custom commands to be executed before and after ExecStart . Type=oneshot enables specifying multiple custom commands that are then executed sequentially. ExecStop Specifies commands or scripts to be executed when the unit is stopped. ExecReload Specifies commands or scripts to be executed when the unit is reloaded. Restart With this option enabled, the service is restarted after its process exits, with the exception of a clean stop by the systemctl command. RemainAfterExit If set to True, the service is considered active even when all its processes exited. Default value is False. This option is especially useful if Type=oneshot is configured. [a] For a complete list of options configurable in the [Service] section, see the systemd.service(5) manual page. 1.6. Important [Install] section options The following tables lists important options of the [Install] section. Table 1.4. Important [Install] section options Option [a] Description Alias Provides a space-separated list of additional names for the unit. Most systemctl commands, excluding systemctl enable , can use aliases instead of the actual unit name. RequiredBy A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain a Require dependency on the unit. WantedBy A list of units that weakly depend on the unit. When this unit is enabled, the units listed in WantedBy gain a Want dependency on the unit. Also Specifies a list of units to be installed or uninstalled along with the unit. DefaultInstance Limited to instantiated units, this option specifies the default instance for which the unit is enabled. See Working with instantiated units . [a] For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual page. 1.7. Creating custom unit files There are several use cases for creating unit files from scratch: you could run a custom daemon, create a second instance of some existing service as in Creating a custom unit file by using the second instance of the sshd service On the other hand, if you intend just to modify or extend the behavior of an existing unit, use the instructions from Modifying existing unit files . Procedure To create a custom service, prepare the executable file with the service. The file can contain a custom-created script, or an executable delivered by a software provider. If required, prepare a PID file to hold a constant PID for the main process of the custom service. You can also include environment files to store shell variables for the service. Make sure the source script is executable (by executing the chmod a+x ) and is not interactive. Create a unit file in the /etc/systemd/system/ directory and make sure it has correct file permissions. Execute as root : Replace <name> with a name of the service you want to created. Note that the file does not need to be executable. Open the created <name> .service file and add the service configuration options. You can use various options depending on the type of service you wish to create, see Unit file structure . 
The following is an example unit configuration for a network-related service: <service_description> is an informative description that is displayed in journal log files and in the output of the systemctl status command. The After setting ensures that the service is started only after the network is running. Add a space-separated list of other relevant services or targets. path_to_executable stands for the path to the actual service executable. Type=forking is used for daemons that make the fork system call. The main process of the service is created with the PID specified in path_to_pidfile . Find other startup types in Important [Service] section options . WantedBy states the target or targets that the service should be started under. Think of these targets as a replacement for the older concept of runlevels. Notify systemd that a new <name> .service file exists: Warning Always execute the systemctl daemon-reload command after creating new unit files or modifying existing unit files. Otherwise, the systemctl start or systemctl enable commands could fail due to a mismatch between states of systemd and actual service unit files on disk. Note that on systems with a large number of units this can take a long time, as the state of each unit has to be serialized and subsequently deserialized during the reload. 1.8. Creating a custom unit file by using the second instance of the sshd service If you need to configure and run multiple instances of a service, you can create copies of the original service configuration files and modify certain parameters to avoid conflicts with the primary instance of the service. Procedure To create a second instance of the sshd service: Create a copy of the sshd_config file that the second daemon will use: Edit the sshd-second_config file created in the previous step to assign a different port number and PID file to the second daemon: See the sshd_config (5) manual page for more information about the Port and PidFile options. Make sure the port you choose is not in use by any other service. The PID file does not have to exist before running the service; it is generated automatically on service start. Create a copy of the systemd unit file for the sshd service: Alter the created sshd-second.service : Modify the Description option: Add sshd.service to the services specified in the After option, so that the second instance starts only after the first one has already started: Remove the ExecStartPre=/usr/sbin/sshd-keygen line; the first instance of sshd already includes key generation. Add the -f /etc/ssh/sshd-second_config parameter to the sshd command, so that the alternative configuration file is used: After the modifications, the sshd-second.service unit file contains the following settings: If using SELinux, add the port for the second instance of sshd to SSH ports; otherwise, the second instance of sshd will be prevented from binding to the port: Enable sshd-second.service to start automatically on boot: Verify that the sshd-second.service is running by using the systemctl status command. Verify that the port is enabled correctly by connecting to the service: Make sure you configure the firewall to allow connections to the second instance of sshd . 1.9. Finding the systemd service description You can find descriptive information about the script on the line starting with #description . Use this description together with the service name in the Description option in the [Unit] section of the unit file. The header might contain similar data on the #Short-Description and #Description lines. 1.10.
Finding the systemd service dependencies The Linux Standard Base (LSB) header might contain several directives that form dependencies between services. Most of them are translatable to systemd unit options; see the following table: Table 1.5. Dependency options from the LSB header LSB Option Description Unit File Equivalent Provides Specifies the boot facility name of the service, that can be referenced in other init scripts (with the "$" prefix). This is no longer needed as unit files refer to other units by their file names. - Required-Start Contains boot facility names of required services. This is translated as an ordering dependency; boot facility names are replaced with unit file names of corresponding services or targets they belong to. For example, in case of postfix , the Required-Start dependency on $network was translated to the After dependency on network.target. After , Before Should-Start Constitutes weaker dependencies than Required-Start. Failed Should-Start dependencies do not affect the service startup. After , Before Required-Stop , Should-Stop Constitute negative dependencies. Conflicts 1.11. Finding default targets of the service The line starting with #chkconfig contains three numerical values. The most important is the first number that represents the default runlevels in which the service is started. Map these runlevels to equivalent systemd targets. Then list these targets in the WantedBy option in the [Install] section of the unit file. For example, postfix was previously started in runlevels 2, 3, 4, and 5, which translates to multi-user.target and graphical.target. Note that the graphical.target depends on multi-user.target, therefore it is not necessary to specify both. You might find information about default and forbidden runlevels also at #Default-Start and #Default-Stop lines in the LSB header. The other two values specified on the #chkconfig line represent startup and shutdown priorities of the init script. These values are interpreted by systemd if it loads the init script, but there is no unit file equivalent. 1.12. Finding files used by the service Init scripts require loading a function library from a dedicated directory and allow importing configuration, environment, and PID files. Environment variables are specified on the line starting with #config in the init script header, which translates to the EnvironmentFile unit file option. The PID file specified on the #pidfile init script line is imported to the unit file with the PIDFile option. The key information that is not included in the init script header is the path to the service executable, and potentially some other files required by the service. In previous versions of Red Hat Enterprise Linux, init scripts used a Bash case statement to define the behavior of the service on default actions, such as start , stop , or restart , as well as custom-defined actions. The following excerpt from the postfix init script shows the block of code to be executed at service start. The extensibility of the init script allowed specifying two custom functions, conf_check() and make_aliasesdb() , that are called from the start() function block. On closer look, several external files and directories are mentioned in the above code: the main service executable /usr/sbin/postfix , the /etc/postfix/ and /var/spool/postfix/ configuration directories, as well as the /usr/sbin/postconf utility.
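As a hedged illustration only, the pieces identified above might map onto a unit file roughly as follows; this is a sketch for discussion, not the postfix.service unit that Red Hat Enterprise Linux actually ships:
[Unit]
Description=Postfix Mail Transport Agent
After=network.target

[Service]
Type=forking
# The custom make_aliasesdb() step can be approximated with an ExecStartPre command.
ExecStartPre=/usr/bin/newaliases
ExecStart=/usr/sbin/postfix start
ExecReload=/usr/sbin/postfix reload
ExecStop=/usr/sbin/postfix stop

[Install]
WantedBy=multi-user.target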
systemd supports only the predefined actions, but enables executing custom executables with the ExecStart , ExecStartPre , ExecStartPost , ExecStop , and ExecReload options. The /usr/sbin/postfix executable, together with its supporting scripts, is executed on service start. Converting complex init scripts requires understanding the purpose of every statement in the script. Some of the statements are specific to the operating system version, therefore you do not need to translate them. On the other hand, some adjustments might be needed in the new environment, both in the unit file as well as in the service executable and supporting files. 1.13. Modifying existing unit files If you want to modify existing unit files, proceed to the /etc/systemd/system/ directory. Note that you should not modify the default unit files, which your system stores in the /usr/lib/systemd/system/ directory. Procedure Depending on the extent of the required changes, pick one of the following approaches: Create a directory for supplementary configuration files at /etc/systemd/system/ <unit> .d/ . This method is recommended for most use cases. You can extend the default configuration with additional functionality, while still referring to the original unit file. Changes to the default unit introduced with a package upgrade are therefore applied automatically. See Extending the default unit configuration for more information. Create a copy of the original unit file from the /usr/lib/systemd/system/ directory in the /etc/systemd/system/ directory and make changes there. The copy overrides the original file, therefore changes introduced with the package update are not applied. This method is useful for making significant unit changes that should persist regardless of package updates. See Overriding the default unit configuration for details. To return to the default configuration of the unit, delete custom-created configuration files in the /etc/systemd/system/ directory. Apply changes to unit files without rebooting the system: The daemon-reload option reloads all unit files and recreates the entire dependency tree, which is needed to immediately apply any change to a unit file. As an alternative, you can achieve the same result with the following command: If the modified unit file belongs to a running service, restart the service: Important To modify properties, such as dependencies or timeouts, of a service that is handled by a SysV initscript, do not modify the initscript itself. Instead, create a systemd drop-in configuration file for the service as described in: Extending the default unit configuration and Overriding the default unit configuration . Then manage this service in the same way as a normal systemd service. For example, to extend the configuration of the network service, do not modify the /etc/rc.d/init.d/network initscript file. Instead, create a new directory /etc/systemd/system/network.service.d/ and a systemd drop-in file /etc/systemd/system/network.service.d/ my_config .conf . Then, put the modified values into the drop-in file. Note: systemd knows the network service as network.service , which is why the created directory must be called network.service.d . 1.14. Extending the default unit configuration You can extend the default unit file with additional systemd configuration options. Procedure Create a configuration directory in /etc/systemd/system/ : Replace <name> with the name of the service you want to extend. The syntax applies to all unit types.
Create a configuration file with the .conf suffix: Replace <config_name> with the name of the configuration file. This file adheres to the normal unit file structure and you have to specify all directives in the appropriate sections, see Unit file structure . For example, to add a custom dependency, create a configuration file with the following content: The <new_dependency> stands for the unit to be marked as a dependency. Another example is a configuration file that restarts the service after its main process exits, with a delay of 30 seconds: Create small configuration files focused only on one task. Such files can be easily moved or linked to configuration directories of other services. Apply changes to the unit: Example 1.1. Extending the httpd.service configuration To modify the httpd.service unit so that a custom shell script is automatically executed when starting the Apache service, perform the following steps. Create a directory and a custom configuration file: Specify the script you want to execute after the main service process by inserting the following text into the custom_script.conf file: Apply the unit changes: Note The configuration files from the /etc/systemd/system/ configuration directories take precedence over unit files in /usr/lib/systemd/system/ . Therefore, if the configuration files contain an option that can be specified only once, such as Description or ExecStart , the default value of this option is overridden. Note that in the output of the systemd-delta command, described in Monitoring overridden units , such units are always marked as [EXTENDED], even though in sum, certain options are actually overridden. 1.15. Overriding the default unit configuration You can make changes to the unit file configuration that will persist after updating the package that provides the unit file. Procedure Copy the unit file to the /etc/systemd/system/ directory by entering the following command as root : Open the copied file with a text editor, and make changes. Apply unit changes: 1.16. Changing the timeout limit You can specify a timeout value per service to prevent a malfunctioning service from freezing the system. Otherwise, the default value for timeout is 90 seconds for normal services and 300 seconds for SysV-compatible services. Procedure To extend the timeout limit for the httpd service: Copy the httpd unit file to the /etc/systemd/system/ directory: Open the /etc/systemd/system/httpd.service file and specify the TimeoutStartSec value in the [Service] section: Reload the systemd daemon: Optional. Verify the new timeout value: Note To change the timeout limit globally, set DefaultTimeoutStartSec in the /etc/systemd/system.conf file. 1.17. Monitoring overridden units You can display an overview of overridden or modified unit files by using the systemd-delta command. Procedure Display an overview of overridden or modified unit files: For example, the output of the command can look as follows: 1.18. Working with instantiated units You can manage multiple instances of a service by using a single template configuration. You can define a generic template for a unit and generate multiple instances of that unit with specific parameters at runtime. The template is indicated by the at sign (@). Instantiated units can be started from another unit file (using Requires or Wants options), or with the systemctl start command. Instantiated service units are named the following way: The <template_name> stands for the name of the template configuration file.
Replace <instance_name> with the name for the unit instance. Several instances can point to the same template file with configuration options common for all instances of the unit. The template unit name has the following form: For example, the following Wants setting in a unit file: first makes systemd search for the given service units. If no such units are found, the part between "@" and the type suffix is ignored and systemd searches for the getty@.service file, reads the configuration from it, and starts the services. For example, the getty@.service template contains the following directives: When getty@ttyA.service and getty@ttyB.service are instantiated from the above template, Description = is resolved as Getty on ttyA and Getty on ttyB . 1.19. Important unit specifiers You can use the wildcard characters, called unit specifiers , in any unit configuration file. Unit specifiers substitute certain unit parameters and are interpreted at runtime. Table 1.6. Important unit specifiers Unit Specifier Meaning Description %n Full unit name Stands for the full unit name including the type suffix. %N has the same meaning but also replaces the forbidden characters with ASCII codes. %p Prefix name Stands for a unit name with type suffix removed. For instantiated units %p stands for the part of the unit name before the "@" character. %i Instance name Is the part of the instantiated unit name between the "@" character and the type suffix. %I has the same meaning but also replaces the forbidden characters with ASCII codes. %H Host name Stands for the hostname of the running system at the point in time the unit configuration is loaded. %t Runtime directory Represents the runtime directory, which is either /run for the root user, or the value of the XDG_RUNTIME_DIR variable for unprivileged users. For a complete list of unit specifiers, see the systemd.unit(5) manual page. 1.20. Additional resources How to set limits for services in RHEL and systemd How to write a service unit file which enforces that particular services have to be started How to decide what dependencies a systemd service unit definition should have
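As a closing illustration of the template and specifier mechanics described above, consider the following hypothetical worker@.service template; the unit name, the /usr/bin/my-worker path, and the queue instance names are invented for this sketch:
# /etc/systemd/system/worker@.service (hypothetical template unit)
[Unit]
Description=Worker instance %i

[Service]
# %i expands to the instance name, for example "queue1".
ExecStart=/usr/bin/my-worker --queue %i

[Install]
WantedBy=multi-user.target
Instances would then be started as, for example, systemctl start worker@queue1.service and systemctl start worker@queue2.service , each reusing the same template file.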
|
[
"<unit_name> . <type_extension>",
"DefaultTimeoutStartSec= required value",
"touch /etc/systemd/system/ <name> .service chmod 664 /etc/systemd/system/ <name> .service",
"[Unit] Description= <service_description> After=network.target [Service] ExecStart= <path_to_executable> Type=forking PIDFile= <path_to_pidfile> [Install] WantedBy=default.target",
"systemctl daemon-reload systemctl start <name> .service",
"cp /etc/ssh/sshd{,-second}_config",
"Port 22220 PidFile /var/run/sshd-second.pid",
"cp /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service",
"Description=OpenSSH server second instance daemon",
"After=syslog.target network.target auditd.service sshd.service",
"ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config USDOPTIONS",
"[Unit] Description=OpenSSH server second instance daemon After=syslog.target network.target auditd.service sshd.service [Service] EnvironmentFile=/etc/sysconfig/sshd ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config USDOPTIONS ExecReload=/bin/kill -HUP USDMAINPID KillMode=process Restart=on-failure RestartSec=42s [Install] WantedBy=multi-user.target",
"semanage port -a -t ssh_port_t -p tcp 22220",
"systemctl enable sshd-second.service",
"ssh -p 22220 user@server",
"conf_check() { [ -x /usr/sbin/postfix ] || exit 5 [ -d /etc/postfix ] || exit 6 [ -d /var/spool/postfix ] || exit 5 } make_aliasesdb() { if [ \"USD(/usr/sbin/postconf -h alias_database)\" == \"hash:/etc/aliases\" ] then # /etc/aliases.db might be used by other MTA, make sure nothing # has touched it since our last newaliases call [ /etc/aliases -nt /etc/aliases.db ] || [ \"USDALIASESDB_STAMP\" -nt /etc/aliases.db ] || [ \"USDALIASESDB_STAMP\" -ot /etc/aliases.db ] || return /usr/bin/newaliases touch -r /etc/aliases.db \"USDALIASESDB_STAMP\" else /usr/bin/newaliases fi } start() { [ \"USDEUID\" != \"0\" ] && exit 4 # Check that networking is up. [ USD{NETWORKING} = \"no\" ] && exit 1 conf_check # Start daemons. echo -n USD\"Starting postfix: \" make_aliasesdb >/dev/null 2>&1 [ -x USDCHROOT_UPDATE ] && USDCHROOT_UPDATE /usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure USD\"USDprog start\" RETVAL=USD? [ USDRETVAL -eq 0 ] && touch USDlockfile echo return USDRETVAL }",
"systemctl daemon-reload",
"init q",
"systemctl restart <name> .service",
"mkdir /etc/systemd/system/ <name> .service.d/",
"touch /etc/systemd/system/name.service.d/ <config_name> .conf",
"[Unit] Requires= <new_dependency> After= <new_dependency>",
"[Service] Restart=always RestartSec=30",
"systemctl daemon-reload systemctl restart <name> .service",
"mkdir /etc/systemd/system/httpd.service.d/",
"touch /etc/systemd/system/httpd.service.d/custom_script.conf",
"[Service] ExecStartPost=/usr/local/bin/custom.sh",
"systemctl daemon-reload",
"systemctl restart httpd.service",
"cp /usr/lib/systemd/system/ <name> .service /etc/systemd/system/ <name> .service",
"systemctl daemon-reload systemctl restart <name> .service",
"cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service",
"[Service] PrivateTmp=true TimeoutStartSec=10 [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl show httpd -p TimeoutStartUSec",
"systemd-delta",
"[EQUIVALENT] /etc/systemd/system/default.target /usr/lib/systemd/system/default.target [OVERRIDDEN] /etc/systemd/system/autofs.service /usr/lib/systemd/system/autofs.service --- /usr/lib/systemd/system/autofs.service 2014-10-16 21:30:39.000000000 -0400 +++ /etc/systemd/system/autofs.service 2014-11-21 10:00:58.513568275 -0500 @@ -8,7 +8,8 @@ EnvironmentFile=-/etc/sysconfig/autofs ExecStart=/usr/sbin/automount USDOPTIONS --pid-file /run/autofs.pid ExecReload=/usr/bin/kill -HUP USDMAINPID -TimeoutSec=180 +TimeoutSec=240 +Restart=Always [Install] WantedBy=multi-user.target [MASKED] /etc/systemd/system/cups.service /usr/lib/systemd/system/cups.service [EXTENDED] /usr/lib/systemd/system/sssd.service /etc/systemd/system/sssd.service.d/journal.conf 4 overridden configuration files found.",
"<template_name> @ <instance_name> .service",
"<unit_name> @.service",
"[email protected] [email protected]",
"[Unit] Description=Getty on %I [Service] ExecStart=-/sbin/agetty --noclear %I USDTERM"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_systemd_unit_files_to_customize_and_optimize_your_system/assembly_working-with-systemd-unit-files_working-with-systemd
|
14.8.12. smbmount
|
14.8.12. smbmount smbmount <//server/share> <mount_point> <-o options> The smbmount program uses the low-level smbmnt program to mount an smbfs file system (Samba share). The mount -t smbfs <//server/share> <mount_point> <-o options> command also works. For example:
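The following hedged variant illustrates the syntax; the server, share, and mount point names are hypothetical, and the workgroup and uid options are optional additions:
# Mount //fileserver/projects at /mnt/projects as user kristin in the EXAMPLE workgroup.
smbmount //fileserver/projects /mnt/projects -o username=kristin,workgroup=EXAMPLE,uid=500
# The equivalent invocation using the mount command directly:
mount -t smbfs //fileserver/projects /mnt/projects -o username=kristin,workgroup=EXAMPLE,uid=500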
|
[
"~]# smbmount //wakko/html /mnt/html -o username=kristin Password: <password> ls -l /mnt/html total 0 -rwxr-xr-x 1 root root 0 Jan 29 08:09 index.html"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-smbmount
|
Preface
|
Preface A reference to the commands available for the unified OpenStack command-line client.
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/pr01
|
7.17. bind-dyndb-ldap
|
7.17. bind-dyndb-ldap 7.17.1. RHBA-2015:1259 - bind-dyndb-ldap bug fix update Updated bind-dyndb-ldap packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The dynamic LDAP back end is a plug-in for BIND that provides back-end capabilities for LDAP databases. It features support for dynamic updates and internal caching that helps to reduce the load on LDAP servers. Bug Fixes BZ# 1175318 Previously, the bind-dyndb-ldap 2.x driver (used in Red Hat Enterprise Linux 6.x) did not handle forward zones correctly when it was in the same replication topology as bind-dyndb-ldap 6.x (used in Red Hat Enterprise Linux 7.1). As a consequence, forward zones stopped working on all replicas. The underlying source code has been patched to fix this bug, and forward zones now continue to work in the described situation. BZ# 1142176 The bind-dyndb-ldap library incorrectly compared current time and the expiration time of the Kerberos ticket used for authentication to an LDAP server. As a consequence, the Kerberos ticket was not renewed under certain circumstances, which caused the connection to the LDAP server to fail. The connection failure often happened after a BIND service reload was triggered by the logrotate utility. A patch has been applied to fix this bug, and Kerberos tickets are correctly renewed in this scenario. BZ# 1126841 Prior to this update, the bind-dyndb-ldap plug-in incorrectly locked certain data structures. Consequently, a race condition during forwarder address reconfiguration could cause BIND to terminate unexpectedly. This bug has been fixed, bind-dyndb-ldap now locks data structures properly, and BIND no longer crashes in this scenario. BZ# 1219568 Previously, the bind-dyndb-ldap plug-in incorrectly handled timeouts which occurred during LDAP operations. As a consequence, under very specific circumstances, the BIND daemon could terminate unexpectedly. With this update, bind-dyndb-ldap has been fixed to correctly handle timeouts during LDAP operations and the BIND daemon no longer crashes in this scenario. BZ# 1183805 The documentation for bind-dyndb-ldap-2.3 located in the /usr/share/doc/bind-dyndb-ldap-2.3/README file incorrectly stated that the "idnsAllowTransfer" and "idnsAllowQuery" LDAP attributes are multi-valued. Consequently, users were not able to configure DNS zone transfer and query acess control lists according to the documentation. The documentation has been fixed to explain the correct attribute syntax. Users of bind-dyndb-ldap are advised to upgrade to these updated packages, which fix these bugs.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-bind-dyndb-ldap
|
Configuring Red Hat build of OpenJDK 21 on RHEL with FIPS
|
Configuring Red Hat build of OpenJDK 21 on RHEL with FIPS Red Hat build of OpenJDK 21 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel_with_fips/index
|
Chapter 9. Ceph performance counters
|
Chapter 9. Ceph performance counters As a storage administrator, you can gather performance metrics of the Red Hat Ceph Storage cluster. The Ceph performance counters are a collection of internal infrastructure metrics. The collection, aggregation, and graphing of this metric data can be done by an assortment of tools and can be useful for performance analytics. 9.1. Access to Ceph performance counters The performance counters are available through a socket interface for the Ceph Monitors and the OSDs. The socket file for each respective daemon is located under /var/run/ceph , by default. The performance counters are grouped together into collection names. These collection names represent a subsystem or an instance of a subsystem. Here is the full list of the Monitor and the OSD collection name categories with a brief description for each: Monitor Collection Name Categories Cluster Metrics - Displays information about the storage cluster: Monitors, OSDs, Pools, and PGs Level Database Metrics - Displays information about the back-end KeyValueStore database Monitor Metrics - Displays general monitor information Paxos Metrics - Displays information on cluster quorum management Throttle Metrics - Displays the statistics on how the monitor is throttling OSD Collection Name Categories Write Back Throttle Metrics - Displays the statistics on how the write back throttle is tracking unflushed IO Level Database Metrics - Displays information about the back-end KeyValueStore database Objecter Metrics - Displays information on various object-based operations Read and Write Operations Metrics - Displays information on various read and write operations Recovery State Metrics - Displays latencies on various recovery states OSD Throttle Metrics - Displays the statistics on how the OSD is throttling RADOS Gateway Collection Name Categories Object Gateway Client Metrics - Displays statistics on GET and PUT requests Objecter Metrics - Displays information on various object-based operations Object Gateway Throttle Metrics - Displays the statistics on how the OSD is throttling 9.2. Display the Ceph performance counters The ceph daemon DAEMON_NAME perf schema command outputs the available metrics. Each metric has an associated bit field value type. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view the metric's schema: Syntax Note You must run the ceph daemon command from the node running the daemon. Executing the ceph daemon DAEMON_NAME perf schema command from the monitor node: Example Executing the ceph daemon DAEMON_NAME perf schema command from the OSD node: Example Table 9.1. The bit field value definitions Bit Meaning 1 Floating point value 2 Unsigned 64-bit integer value 4 Average (Sum + Count) 8 Counter Each value will have bit 1 or 2 set to indicate the type, either a floating point or an integer value. When bit 4 is set, there will be two values to read, a sum and a count. When bit 8 is set, the average for the interval would be the sum delta, since the previous read, divided by the count delta. Alternatively, dividing the values outright would provide the lifetime average value. Typically these are used to measure latencies, the number of requests and a sum of request latencies. Some bit values are combined, for example 5, 6 and 10. A bit value of 5 is a combination of bit 1 and bit 4. This means the average will be a floating point value. A bit value of 6 is a combination of bit 2 and bit 4. This means the average value will be an integer.
A bit value of 10 is a combination of bit 2 and bit 8. This means the counter value will be an integer value. Additional Resources See Average count and sum section in the Red Hat Ceph Storage Administration Guide for more details. 9.3. Viewing the performance counters for users and buckets The Ceph Object Gateway uses the performance counters to track metrics. You can visualize a cluster-wide view of the usage data over time in the Ceph Exporter port, which is usually, 9926 , which includes PUT operations for objects in a bucket. To track the operation metrics by users, set the rgw_user_counters_cache to true and to track the operation metrics by buckets, set the rgw_bucket_counters_cache to true . You can use both rgw_user_counters_cache_size and rgw_bucket_counters_cache_size to set number of entries in each cache. Counters are evicted from a cache once the number of counters in the cache are greater than the cache size configuration variable. The counters that are evicted are the least recently used (LRU). For example, if the number of buckets exceeded rgw_bucket_counters_cache_size by 1 and the counters with label bucket1 were the last to be updated, the counters for bucket1 get evicted from the cache. If S3 operations tracked by the operation metrics were done on bucket1 after eviction, all the metrics in the cache for bucket1 start at 0 . Cache sizing can depend on several factors, which include the following: Number of users in the cluster. Number of buckets in the cluster. Memory usage of the Ceph Object Gateway. Disk and memory usage of Prometheus. To help calculate the Ceph Object Gateway's memory usage of a cache, it should be noted that each cache entry, encompassing all the operation metrics, is 1360 bytes. This value is an estimate and subject to change if metrics are added or removed from the operation metrics list. Important Since the operation metrics are labeled as performance counters, they live in memory. If the Ceph Object Gateway is restarted or crashes, all counters in the Ceph Object Gateway, whether in a cache or not, are lost. Prerequisites A running Red Hat Ceph Storage cluster with Ceph Object Gateway installed. Monitoring stack enabled which includes Prometheus and ceph-exporter . Procedure Set the performance counters for users and buckets. Set the performance counters for the users. Example Set the performance counters for the buckets. Example Restart the Ceph Object Gateway service. Example Create users. For more information, see User Management . Create buckets and upload objects into the bucket. Configure s3cmd . Example Create the S3 bucket. Syntax Example Create your file, input all the data, upload buckets on S3. Syntax Example Verify that the objects are uploaded. Example View the performance counter dump. Syntax Verify that the metrics are running on the local host. Syntax Example for per bucket perf counter: Example for per user perf counter: Verify that the same metrics on Prometheus. Syntax Example 9.4. Dump the Ceph performance counters The ceph daemon .. perf dump command outputs the current values and groups the metrics under the collection name for each subsystem. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view the current metric data: Syntax Note You must run the ceph daemon command from the node running the daemon. Executing ceph daemon .. perf dump command from the Monitor node: Executing the ceph daemon .. 
perf dump command from the OSD node: Additional Resources To view a short description of each Monitor metric available, please see the Ceph monitor metrics table . 9.5. Average count and sum All latency numbers have a bit field value of 5. This field contains floating point values for the average count and sum. The avgcount is the number of operations within this range and the sum is the total latency in seconds. When dividing the sum by the avgcount this will provide you with an idea of the latency per operation. Additional Resources To view a short description of each OSD metric available, please see the Ceph OSD table . 9.6. Ceph Monitor metrics Cluster Metrics Table Level Database Metrics Table General Monitor Metrics Table Paxos Metrics Table Throttle Metrics Table Table 9.2. Cluster Metrics Table Collection Name Metric Name Bit Field Value Short Description cluster num_mon 2 Number of monitors num_mon_quorum 2 Number of monitors in quorum num_osd 2 Total number of OSD num_osd_up 2 Number of OSDs that are up num_osd_in 2 Number of OSDs that are in cluster osd_epoch 2 Current epoch of OSD map osd_bytes 2 Total capacity of cluster in bytes osd_bytes_used 2 Number of used bytes on cluster osd_bytes_avail 2 Number of available bytes on cluster num_pool 2 Number of pools num_pg 2 Total number of placement groups num_pg_active_clean 2 Number of placement groups in active+clean state num_pg_active 2 Number of placement groups in active state num_pg_peering 2 Number of placement groups in peering state num_object 2 Total number of objects on cluster num_object_degraded 2 Number of degraded (missing replicas) objects num_object_misplaced 2 Number of misplaced (wrong location in the cluster) objects num_object_unfound 2 Number of unfound objects num_bytes 2 Total number of bytes of all objects num_mds_up 2 Number of MDSs that are up num_mds_in 2 Number of MDS that are in cluster num_mds_failed 2 Number of failed MDS mds_epoch 2 Current epoch of MDS map Table 9.3. Level Database Metrics Table Collection Name Metric Name Bit Field Value Short Description leveldb leveldb_get 10 Gets leveldb_transaction 10 Transactions leveldb_compact 10 Compactions leveldb_compact_range 10 Compactions by range leveldb_compact_queue_merge 10 Mergings of ranges in compaction queue leveldb_compact_queue_len 2 Length of compaction queue Table 9.4. General Monitor Metrics Table Collection Name Metric Name Bit Field Value Short Description mon num_sessions 2 Current number of opened monitor sessions session_add 10 Number of created monitor sessions session_rm 10 Number of remove_session calls in monitor session_trim 10 Number of trimed monitor sessions num_elections 10 Number of elections monitor took part in election_call 10 Number of elections started by monitor election_win 10 Number of elections won by monitor election_lose 10 Number of elections lost by monitor Table 9.5. 
Paxos Metrics Table Collection Name Metric Name Bit Field Value Short Description paxos start_leader 10 Starts in leader role start_peon 10 Starts in peon role restart 10 Restarts refresh 10 Refreshes refresh_latency 5 Refresh latency begin 10 Started and handled begins begin_keys 6 Keys in transaction on begin begin_bytes 6 Data in transaction on begin begin_latency 5 Latency of begin operation commit 10 Commits commit_keys 6 Keys in transaction on commit commit_bytes 6 Data in transaction on commit commit_latency 5 Commit latency collect 10 Peon collects collect_keys 6 Keys in transaction on peon collect collect_bytes 6 Data in transaction on peon collect collect_latency 5 Peon collect latency collect_uncommitted 10 Uncommitted values in started and handled collects collect_timeout 10 Collect timeouts accept_timeout 10 Accept timeouts lease_ack_timeout 10 Lease acknowledgement timeouts lease_timeout 10 Lease timeouts store_state 10 Store a shared state on disk store_state_keys 6 Keys in transaction in stored state store_state_bytes 6 Data in transaction in stored state store_state_latency 5 Storing state latency share_state 10 Sharings of state share_state_keys 6 Keys in shared state share_state_bytes 6 Data in shared state new_pn 10 New proposal number queries new_pn_latency 5 New proposal number getting latency Table 9.6. Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description throttle-* val 10 Currently available throttle max 10 Max value for throttle get 10 Gets get_sum 10 Got data get_or_fail_fail 10 Get blocked during get_or_fail get_or_fail_success 10 Successful get during get_or_fail take 10 Takes take_sum 10 Taken data put 10 Puts put_sum 10 Put data wait 5 Waiting latency 9.7. Ceph OSD metrics Write Back Throttle Metrics Table Level Database Metrics Table Objecter Metrics Table Read and Write Operations Metrics Table Recovery State Metrics Table OSD Throttle Metrics Table Table 9.7. Write Back Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description WBThrottle bytes_dirtied 2 Dirty data bytes_wb 2 Written data ios_dirtied 2 Dirty operations ios_wb 2 Written operations inodes_dirtied 2 Entries waiting for write inodes_wb 2 Written entries Table 9.8. Level Database Metrics Table Collection Name Metric Name Bit Field Value Short Description leveldb leveldb_get 10 Gets leveldb_transaction 10 Transactions leveldb_compact 10 Compactions leveldb_compact_range 10 Compactions by range leveldb_compact_queue_merge 10 Mergings of ranges in compaction queue leveldb_compact_queue_len 2 Length of compaction queue Table 9.9. 
Objecter Metrics Table Collection Name Metric Name Bit Field Value Short Description objecter op_active 2 Active operations op_laggy 2 Laggy operations op_send 10 Sent operations op_send_bytes 10 Sent data op_resend 10 Resent operations op_ack 10 Commit callbacks op_commit 10 Operation commits op 10 Operation op_r 10 Read operations op_w 10 Write operations op_rmw 10 Read-modify-write operations op_pg 10 PG operation osdop_stat 10 Stat operations osdop_create 10 Create object operations osdop_read 10 Read operations osdop_write 10 Write operations osdop_writefull 10 Write full object operations osdop_append 10 Append operation osdop_zero 10 Set object to zero operations osdop_truncate 10 Truncate object operations osdop_delete 10 Delete object operations osdop_mapext 10 Map extent operations osdop_sparse_read 10 Sparse read operations osdop_clonerange 10 Clone range operations osdop_getxattr 10 Get xattr operations osdop_setxattr 10 Set xattr operations osdop_cmpxattr 10 Xattr comparison operations osdop_rmxattr 10 Remove xattr operations osdop_resetxattrs 10 Reset xattr operations osdop_tmap_up 10 TMAP update operations osdop_tmap_put 10 TMAP put operations osdop_tmap_get 10 TMAP get operations osdop_call 10 Call (execute) operations osdop_watch 10 Watch by object operations osdop_notify 10 Notify about object operations osdop_src_cmpxattr 10 Extended attribute comparison in multi operations osdop_other 10 Other operations linger_active 2 Active lingering operations linger_send 10 Sent lingering operations linger_resend 10 Resent lingering operations linger_ping 10 Sent pings to lingering operations poolop_active 2 Active pool operations poolop_send 10 Sent pool operations poolop_resend 10 Resent pool operations poolstat_active 2 Active get pool stat operations poolstat_send 10 Pool stat operations sent poolstat_resend 10 Resent pool stats statfs_active 2 Statfs operations statfs_send 10 Sent FS stats statfs_resend 10 Resent FS stats command_active 2 Active commands command_send 10 Sent commands command_resend 10 Resent commands map_epoch 2 OSD map epoch map_full 10 Full OSD maps received map_inc 10 Incremental OSD maps received osd_sessions 2 Open sessions osd_session_open 10 Sessions opened osd_session_close 10 Sessions closed osd_laggy 2 Laggy OSD sessions Table 9.10. 
Read and Write Operations Metrics Table Collection Name Metric Name Bit Field Value Short Description osd op_wip 2 Replication operations currently being processed (primary) op_in_bytes 10 Client operations total write size op_out_bytes 10 Client operations total read size op_latency 5 Latency of client operations (including queue time) op_process_latency 5 Latency of client operations (excluding queue time) op_r 10 Client read operations op_r_out_bytes 10 Client data read op_r_latency 5 Latency of read operation (including queue time) op_r_process_latency 5 Latency of read operation (excluding queue time) op_w 10 Client write operations op_w_in_bytes 10 Client data written op_w_rlat 5 Client write operation readable/applied latency op_w_latency 5 Latency of write operation (including queue time) op_w_process_latency 5 Latency of write operation (excluding queue time) op_rw 10 Client read-modify-write operations op_rw_in_bytes 10 Client read-modify-write operations write in op_rw_out_bytes 10 Client read-modify-write operations read out op_rw_rlat 5 Client read-modify-write operation readable/applied latency op_rw_latency 5 Latency of read-modify-write operation (including queue time) op_rw_process_latency 5 Latency of read-modify-write operation (excluding queue time) subop 10 Suboperations subop_in_bytes 10 Suboperations total size subop_latency 5 Suboperations latency subop_w 10 Replicated writes subop_w_in_bytes 10 Replicated written data size subop_w_latency 5 Replicated writes latency subop_pull 10 Suboperations pull requests subop_pull_latency 5 Suboperations pull latency subop_push 10 Suboperations push messages subop_push_in_bytes 10 Suboperations pushed size subop_push_latency 5 Suboperations push latency pull 10 Pull requests sent push 10 Push messages sent push_out_bytes 10 Pushed size push_in 10 Inbound push messages push_in_bytes 10 Inbound pushed size recovery_ops 10 Started recovery operations loadavg 2 CPU load buffer_bytes 2 Total allocated buffer size numpg 2 Placement groups numpg_primary 2 Placement groups for which this osd is primary numpg_replica 2 Placement groups for which this osd is replica numpg_stray 2 Placement groups ready to be deleted from this osd heartbeat_to_peers 2 Heartbeat (ping) peers we send to heartbeat_from_peers 2 Heartbeat (ping) peers we recv from map_messages 10 OSD map messages map_message_epochs 10 OSD map epochs map_message_epoch_dups 10 OSD map duplicates stat_bytes 2 OSD size stat_bytes_used 2 Used space stat_bytes_avail 2 Available space copyfrom 10 Rados 'copy-from' operations tier_promote 10 Tier promotions tier_flush 10 Tier flushes tier_flush_fail 10 Failed tier flushes tier_try_flush 10 Tier flush attempts tier_try_flush_fail 10 Failed tier flush attempts tier_evict 10 Tier evictions tier_whiteout 10 Tier whiteouts tier_dirty 10 Dirty tier flag set tier_clean 10 Dirty tier flag cleaned tier_delay 10 Tier delays (agent waiting) tier_proxy_read 10 Tier proxy reads agent_wake 10 Tiering agent wake up agent_skip 10 Objects skipped by agent agent_flush 10 Tiering agent flushes agent_evict 10 Tiering agent evictions object_ctx_cache_hit 10 Object context cache hits object_ctx_cache_total 10 Object context cache lookups ceph_cluster_osd_blocklist_count 2 Number of clients blocklisted Table 9.11. 
Recovery State Metrics Table Collection Name Metric Name Bit Field Value Short Description recoverystate_perf initial_latency 5 Initial recovery state latency started_latency 5 Started recovery state latency reset_latency 5 Reset recovery state latency start_latency 5 Start recovery state latency primary_latency 5 Primary recovery state latency peering_latency 5 Peering recovery state latency backfilling_latency 5 Backfilling recovery state latency waitremotebackfillreserved_latency 5 Wait remote backfill reserved recovery state latency waitlocalbackfillreserved_latency 5 Wait local backfill reserved recovery state latency notbackfilling_latency 5 Notbackfilling recovery state latency repnotrecovering_latency 5 Repnotrecovering recovery state latency repwaitrecoveryreserved_latency 5 Rep wait recovery reserved recovery state latency repwaitbackfillreserved_latency 5 Rep wait backfill reserved recovery state latency RepRecovering_latency 5 RepRecovering recovery state latency activating_latency 5 Activating recovery state latency waitlocalrecoveryreserved_latency 5 Wait local recovery reserved recovery state latency waitremoterecoveryreserved_latency 5 Wait remote recovery reserved recovery state latency recovering_latency 5 Recovering recovery state latency recovered_latency 5 Recovered recovery state latency clean_latency 5 Clean recovery state latency active_latency 5 Active recovery state latency replicaactive_latency 5 Replicaactive recovery state latency stray_latency 5 Stray recovery state latency getinfo_latency 5 Getinfo recovery state latency getlog_latency 5 Getlog recovery state latency waitactingchange_latency 5 Waitactingchange recovery state latency incomplete_latency 5 Incomplete recovery state latency getmissing_latency 5 Getmissing recovery state latency waitupthru_latency 5 Waitupthru recovery state latency Table 9.12. OSD Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description throttle-* val 10 Currently available throttle max 10 Max value for throttle get 10 Gets get_sum 10 Got data get_or_fail_fail 10 Get blocked during get_or_fail get_or_fail_success 10 Successful get during get_or_fail take 10 Takes take_sum 10 Taken data put 10 Puts put_sum 10 Put data wait 5 Waiting latency 9.8. Ceph Object Gateway metrics Ceph Object Gateway Client Table Objecter Metrics Table Ceph Object Gateway Throttle Metrics Table Table 9.13. Ceph Object Gateway Client Metrics Table Collection Name Metric Name Bit Field Value Short Description client.rgw.<rgw_node_name> req 10 Requests failed_req 10 Aborted requests copy_obj_ops 10 Copy objects copy_obj_bytes 10 Size of copy objects copy_obj_lat 10 Copy object latency del_obj_ops 10 Delete objects del_obj_bytes 10 Size of delete objects del_obj_lat 10 Delete object latency del_bucket_ops 10 Delete Buckets del_bucket_lat 10 Delete bucket latency get 10 Gets get_b 10 Size of gets get_initial_lat 5 Get latency list_obj_ops 10 List objects list_obj_lat 10 List object latency list_buckets_ops 10 List buckets list_buckets_lat 10 List buckets latency put 10 Puts put_b 10 Size of puts put_initial_lat 5 Put latency qlen 2 Queue length qactive 2 Active requests queue cache_hit 10 Cache hits cache_miss 10 Cache miss keystone_token_cache_hit 10 Keystone token cache hits keystone_token_cache_miss 10 Keystone token cache miss Table 9.14. 
Objecter Metrics Table Collection Name Metric Name Bit Field Value Short Description objecter op_active 2 Active operations op_laggy 2 Laggy operations op_send 10 Sent operations op_send_bytes 10 Sent data op_resend 10 Resent operations op_ack 10 Commit callbacks op_commit 10 Operation commits op 10 Operation op_r 10 Read operations op_w 10 Write operations op_rmw 10 Read-modify-write operations op_pg 10 PG operation osdop_stat 10 Stat operations osdop_create 10 Create object operations osdop_read 10 Read operations osdop_write 10 Write operations osdop_writefull 10 Write full object operations osdop_append 10 Append operation osdop_zero 10 Set object to zero operations osdop_truncate 10 Truncate object operations osdop_delete 10 Delete object operations osdop_mapext 10 Map extent operations osdop_sparse_read 10 Sparse read operations osdop_clonerange 10 Clone range operations osdop_getxattr 10 Get xattr operations osdop_setxattr 10 Set xattr operations osdop_cmpxattr 10 Xattr comparison operations osdop_rmxattr 10 Remove xattr operations osdop_resetxattrs 10 Reset xattr operations osdop_tmap_up 10 TMAP update operations osdop_tmap_put 10 TMAP put operations osdop_tmap_get 10 TMAP get operations osdop_call 10 Call (execute) operations osdop_watch 10 Watch by object operations osdop_notify 10 Notify about object operations osdop_src_cmpxattr 10 Extended attribute comparison in multi operations osdop_other 10 Other operations linger_active 2 Active lingering operations linger_send 10 Sent lingering operations linger_resend 10 Resent lingering operations linger_ping 10 Sent pings to lingering operations poolop_active 2 Active pool operations poolop_send 10 Sent pool operations poolop_resend 10 Resent pool operations poolstat_active 2 Active get pool stat operations poolstat_send 10 Pool stat operations sent poolstat_resend 10 Resent pool stats statfs_active 2 Statfs operations statfs_send 10 Sent FS stats statfs_resend 10 Resent FS stats command_active 2 Active commands command_send 10 Sent commands command_resend 10 Resent commands map_epoch 2 OSD map epoch map_full 10 Full OSD maps received map_inc 10 Incremental OSD maps received osd_sessions 2 Open sessions osd_session_open 10 Sessions opened osd_session_close 10 Sessions closed osd_laggy 2 Laggy OSD sessions Table 9.15. Ceph Object Gateway Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description throttle-* val 10 Currently available throttle max 10 Max value for throttle get 10 Gets get_sum 10 Got data get_or_fail_fail 10 Get blocked during get_or_fail get_or_fail_success 10 Successful get during get_or_fail take 10 Takes take_sum 10 Taken data put 10 Puts put_sum 10 Put data wait 5 Waiting latency
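All of the counters in the tables above are exposed as JSON over each daemon's admin socket, so a single counter can be pulled out of a dump on the node where the daemon runs. The following is a minimal sketch; the daemon name osd.0, the collection key osd, the counter op_latency, and the use of python3 for filtering are assumptions, so substitute the daemon and counter that you want to inspect:
ceph daemon osd.0 perf dump | python3 -c '
import json, sys
dump = json.load(sys.stdin)             # full dump, keyed by collection name
print(dump["osd"]["op_latency"])        # one latency counter from the tables above
'
Latency counters such as op_latency are reported as an avgcount and sum pair rather than a single value, so compute sum divided by avgcount if you want an average latency.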
|
[
"ceph daemon DAEMON_NAME perf schema",
"ceph daemon mon.host01 perf schema",
"ceph daemon osd.11 perf schema",
"ceph config set client.rgw.rgw.1.host05 rgw_user_counters_cache true",
"ceph config set client.rgw.rgw.1.host05 rgw_bucket_counters_cache true",
"ceph orch restart rgw.rgw.1",
"s3cmd --configure",
"s3cmd mb s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd mb s3://bucket Bucket 's3://bucket/' created",
"s3cmd put FILE_NAME s3:// NAME_OF_THE_BUCKET_ON_S3",
"s3cmd put test.txt s3://bucket upload: 'test.txt' -> 's3://bucket/test.txt' [1 of 1] 21 of 21 100% in 1s 16.75 B/s done",
"s3cmd ls s3://bucket",
"config dump ceph daemon DAEMON_ID counter dump",
"http:// RGW_IP_ADDRESS : CEPH-EXPORTER_PORT /",
"HELP ceph_rgw_op_per_bucket_put_obj_ops Puts TYPE ceph_rgw_op_per_bucket_put_obj_ops counter ceph_rgw_op_per_bucket_put_obj_ops{bucket=\"test-bkt1\",instance_id=\"ceph-ck-perf-ej61qj-node5\"} 10",
"HELP ceph_rgw_op_per_user_put_obj_ops Puts TYPE ceph_rgw_op_per_user_put_obj_ops counter ceph_rgw_op_per_user_put_obj_ops{instance_id=\"ceph-ck-perf-ej61qj-node5\",user=\"ckulal\"} 10",
"http:// RGW_IP_ADDRESS : PROMETHEUS_PORT /",
"https://10.0.210.100:9283/",
"ceph daemon DAEMON_NAME perf dump",
"ceph daemon mon.host01 perf dump",
"ceph daemon osd.11 perf dump"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/administration_guide/ceph-performance-counters
|
18.4. Audit Retention
|
18.4. Audit Retention Audit data must be retained according to their retention categories: Extended Audit Retention: Audit data that is retained for necessary maintenance throughout a certificate's lifetime (from issuance to its expiration or revocation date). In Certificate System, they appear in the following areas: Signed audit logs: All events defined in Appendix E. Audit Events of Red Hat Certificate System's Administration Guide. In the CA's internal LDAP server, certificate request records received by the CA and the certificate records created as the requests are approved. Normal Audit Retention: Audit data that is typically retained only to support normal operation. This includes all events that do not fall under the extended audit retention category. Note Certificate System does not provide any interface to modify or delete audit data. 18.4.1. Location of Audit Data This section explains where Certificate System stores audit data and where to find the expiration date, which plays a crucial role in determining the retention category. 18.4.1.1. Location of Audit Logs Certificate System stores audit logs in the /var/log/pki- name /logs/signedAudit/ directory. For example, the audit logs of a CA are stored in the /var/lib/pki/ instance_name /ca/logs/signedAudit/ directory. Normal users cannot access files in this directory. For a list of audit log events that need to follow the extended audit retention period, see the Audit events appendix in the Red Hat Certificate System Administration Guide . Important Do not delete any audit logs that contain events listed in the "Extended Audit Events" appendix for certificate requests or certificates that have not yet expired. These audit logs can consume storage space, potentially up to all of the space available in the disk partition. 18.4.1.2. Location of Certificate Requests and Certificate Records When certificate signing requests (CSRs) are submitted, the CA stores the CSRs in the request repository provided by the CA's internal directory server. When these requests are approved, each successfully issued certificate results in an LDAP record being created in the certificate repository by the same internal directory server. The CA's internal directory server was specified in the following parameters when the CA was created using the pkispawn utility: pki_ds_hostname pki_ds_ldap_port pki_ds_database pki_ds_base_dn If a certificate request has been approved successfully, the validity of the certificate can be viewed by accessing the CA EE portal, either by request ID or by serial number. To display the validity for a certificate request record: Log into the CA EE portal under https:// host_name : port /ca/ee/ca/ . Click Check Request Status . Enter the Request Identifier. Click Issued Certificate . Search for Validity . To display the validity from a certificate record: Log into the CA EE portal under https:// host_name : port /ca/ee/ca/ . Enter the serial number range. If you search for one specific record, enter the record's serial number in both the lowest and highest serial number field. Click on the search result. Search for Validity . Important Do not delete the request or certificate records of certificates that have not yet expired.
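Because the request and certificate records live in the CA's internal Directory Server, they can also be inspected directly with standard LDAP tools. The following is a minimal sketch only; the host name, port, bind DN, subtree names, and attribute names shown are illustrative assumptions that depend on the pki_ds_hostname, pki_ds_ldap_port, and pki_ds_base_dn values chosen at pkispawn time, so verify them against your own deployment before relying on the output:
# Certificate records in the certificate repository (base DN and attributes are assumptions)
ldapsearch -H ldap://ds.example.com:389 -D "cn=Directory Manager" -W \
    -b "ou=certificateRepository,ou=ca,o=pki-tomcat-CA" \
    "(objectClass=*)" notBefore notAfter certStatus
# Request records in the request repository (base DN and attributes are assumptions)
ldapsearch -H ldap://ds.example.com:389 -D "cn=Directory Manager" -W \
    -b "ou=ca,ou=requests,o=pki-tomcat-CA" \
    "(objectClass=*)" requestId requestState
The expiration date stored in a certificate record is what determines how long the associated audit data falls under the extended retention category.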
| null |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/audit_retention
|
Chapter 7. Working with Helm charts
|
Chapter 7. Working with Helm charts 7.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. Creating a chart in a cluster creates a running instance of the chart known as a release . Each time a chart is created, or a release is upgraded or rolled back, an incremental revision is created. 7.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Container Platform or Kubernetes resources. Package and share your applications as charts. 7.1.2. Red Hat Certification of Helm charts for OpenShift You can choose to verify and certify your Helm charts by Red Hat for all the components you will be deploying on the Red Hat OpenShift Container Platform. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters. 7.1.3. Additional resources For more information on how to certify your Helm charts as a Red Hat partner, see Red Hat Certification of Helm charts for OpenShift . For more information on OpenShift and Container certification guides for Red Hat partners, see Partner Guide for OpenShift and Container Certification . For a list of the charts, see the Red Hat Helm index file . You can view the available charts at the Red Hat Marketplace . For more information, see Using the Red Hat Marketplace . 7.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 7.2.1. On Linux Download the Helm binary and add it to your path: Linux (x86_64, amd64) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Linux on IBM Z(R) and IBM(R) LinuxONE (s390x) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm Linux on IBM Power(R) (ppc64le) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 7.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 7.2.3. On Windows 10 Download the latest .exe file and put in a directory of your preference. 
Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 7.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 7.3. Configuring custom Helm chart repositories You can create Helm releases on an OpenShift Container Platform cluster using the following methods: The CLI. The Developer perspective of the web console. The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file . As a cluster administrator, you can add multiple cluster-scoped and namespace-scoped Helm chart repositories, separate from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . As a regular user or project member with the appropriate role-based access control (RBAC) permissions, you can add multiple namespace-scoped Helm chart repositories, apart from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . In the Developer perspective of the web console, you can use the Helm page to: Create Helm Releases and Repositories using the Create button. Create, update, or delete a cluster-scoped or namespace-scoped Helm chart repository. View the list of the existing Helm chart repositories in the Repositories tab, which can also be easily distinguished as either cluster scoped or namespace scoped. 7.3.1. Installing a Helm chart on an OpenShift Container Platform cluster Prerequisites You have a running OpenShift Container Platform cluster and you have logged into it. You have installed Helm. Procedure Create a new project: USD oc new-project vault Add a repository of Helm charts to your local Helm client: USD helm repo add openshift-helm-charts https://charts.openshift.io/ Example output "openshift-helm-charts" has been added to your repositories Update the repository: USD helm repo update Install an example HashiCorp Vault: USD helm install example-vault openshift-helm-charts/hashicorp-vault Example output NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault! Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2 7.3.2. Creating Helm releases using the Developer perspective You can use either the Developer perspective in the web console or the CLI to select and create a release from the Helm charts listed in the Developer Catalog . You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console. 
Prerequisites You have logged in to the web console and have switched to the Developer perspective . Procedure To create Helm releases from the Helm charts provided in the Developer Catalog : In the Developer perspective, navigate to the +Add view and select a project. Then click Helm Chart option to see all the Helm Charts in the Developer Catalog . Select a chart and read the description, README, and other details about the chart. Click Create . Figure 7.1. Helm charts in developer catalog In the Create Helm Release page: Enter a unique name for the release in the Release Name field. Select the required chart version from the Chart Version drop-down list. Configure your Helm chart by using the Form View or the YAML View . Note Where available, you can switch between the YAML View and Form View . The data is persisted when switching between the views. Click Create to create a Helm release. The web console displays the new release in the Topology view. If a Helm chart has release notes, the web console displays them. If a Helm chart creates workloads, the web console displays them on the Topology or Helm release details page. The workloads are DaemonSet , CronJob , Pod , Deployment , and DeploymentConfig . View the newly created Helm release in the Helm Releases page. You can upgrade, rollback, or delete a Helm release by using the Actions button on the side panel or by right-clicking a Helm release. 7.3.3. Using Helm in the web terminal You can use Helm by Accessing the web terminal in the Developer perspective of the web console. 7.3.4. Creating a custom Helm chart on OpenShift Container Platform Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Container Platform objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. 5 The Version of your chart as per the Semantic Versioning (SemVer) 2.0.0 Specification. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 7.3.5. Adding custom Helm chart repositories As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster. 
Sample Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository, run: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed. For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 7.2. Chart repositories filter Note If a cluster administrator removes all of the chart repositories, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel. 7.3.6. Adding namespace-scoped custom Helm chart repositories The cluster-scoped HelmChartRepository custom resource definition (CRD) for Helm repository provides the ability for administrators to add Helm repositories as custom resources. The namespace-scoped ProjectHelmChartRepository CRD allows project members with the appropriate role-based access control (RBAC) permissions to create Helm repository resources of their choice but scoped to their namespace. Such project members can see charts from both cluster-scoped and namespace-scoped Helm repository resources. Note Administrators can limit users from creating namespace-scoped Helm repository resources. By limiting users, administrators have the flexibility to control the RBAC through a namespace role instead of a cluster role. This avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications. The addition of the namespace-scoped Helm repository does not impact the behavior of the existing cluster-scoped Helm repository. As a regular user or project member with the appropriate RBAC permissions, you can add custom namespace-scoped Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new namespace-scoped Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your namespace. Sample Namespace-scoped Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository scoped to your my-namespace namespace, run: USD cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF The output verifies that the namespace-scoped Helm Chart Repository CR is created: Example output Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed in your my-namespace namespace. 
For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 7.3. Chart repositories filter in your namespace Alternatively, run: USD oc get projecthelmchartrepositories --namespace my-namespace Example output Note If a cluster administrator or a regular user with appropriate RBAC permissions removes all of the chart repositories in a specific namespace, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel for that specific namespace. 7.3.7. Creating credentials and CA certificates to add Helm chart repositories Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to it. You can use the web console as well as the CLI to add credentials and certificates. Procedure To configure the credentials and certificates, and then add a Helm chart repository using the CLI: In the openshift-config namespace, create a ConfigMap object with a custom CA certificate in PEM encoded format, and store it under the ca-bundle.crt key within the config map: USD oc create configmap helm-ca-cert \ --from-file=ca-bundle.crt=/path/to/certs/ca.crt \ -n openshift-config In the openshift-config namespace, create a Secret object to add the client TLS configurations: USD oc create secret tls helm-tls-configs \ --cert=/path/to/certs/client.crt \ --key=/path/to/certs/client.key \ -n openshift-config Note that the client certificate and key must be in PEM encoded format and stored under the keys tls.crt and tls.key , respectively. Add the Helm repository as follows: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF The ConfigMap and Secret are consumed in the HelmChartRepository CR using the tlsConfig and ca fields. These certificates are used to connect to the Helm repository URL. By default, all authenticated users have access to all configured charts. However, for chart repositories where certificates are needed, you must provide users with read access to the helm-ca-cert config map and helm-tls-configs secret in the openshift-config namespace, as follows: USD cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["helm-ca-cert"] verbs: ["get"] - apiGroups: [""] resources: ["secrets"] resourceNames: ["helm-tls-configs"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF 7.3.8. Filtering Helm Charts by their certification level You can filter Helm charts based on their certification level in the Developer Catalog . Procedure In the Developer perspective, navigate to the +Add view and select a project. From the Developer Catalog tile, select the Helm Chart option to see all the Helm charts in the Developer Catalog . 
Use the filters to the left of the list of Helm charts to filter the required charts: Use the Chart Repositories filter to filter charts provided by Red Hat Certification Charts or OpenShift Helm Charts . Use the Source filter to filter charts sourced from Partners , Community , or Red Hat . Certified charts are indicated with the ( ) icon. Note The Source filter will not be visible when there is only one provider type. You can now select the required chart and install it. 7.3.9. Disabling Helm Chart repositories You can disable Helm Charts from a particular Helm Chart Repository in the catalog by setting the disabled property in the HelmChartRepository custom resource to true . Procedure To disable a Helm Chart repository by using CLI, add the disabled: true flag to the custom resource. For example, to remove an Azure sample chart repository, run: To disable a recently added Helm Chart repository by using Web Console: Go to Custom Resource Definitions and search for the HelmChartRepository custom resource. Go to Instances , find the repository you want to disable, and click its name. Go to the YAML tab, add the disabled: true flag in the spec section, and click Save . Example The repository is now disabled and will not appear in the catalog. 7.4. Working with Helm releases You can use the Developer perspective in the web console to update, rollback, or delete a Helm release. 7.4.1. Prerequisites You have logged in to the web console and have switched to the Developer perspective . 7.4.2. Upgrading a Helm release You can upgrade a Helm release to upgrade to a new chart version or update your release configuration. Procedure In the Topology view, select the Helm release to see the side panel. Click Actions Upgrade Helm Release . In the Upgrade Helm Release page, select the Chart Version you want to upgrade to, and then click Upgrade to create another Helm release. The Helm Releases page displays the two revisions. 7.4.3. Rolling back a Helm release If a release fails, you can rollback the Helm release to a version. Procedure To rollback a release using the Helm view: In the Developer perspective, navigate to the Helm view to see the Helm Releases in the namespace. Click the Options menu adjoining the listed release, and select Rollback . In the Rollback Helm Release page, select the Revision you want to rollback to and click Rollback . In the Helm Releases page, click on the chart to see the details and resources for that release. Go to the Revision History tab to see all the revisions for the chart. Figure 7.4. Helm revision history If required, you can further use the Options menu adjoining a particular revision and select the revision to rollback to. 7.4.4. Deleting a Helm release Procedure In the Topology view, right-click the Helm release and select Delete Helm Release . In the confirmation prompt, enter the name of the chart and click Delete .
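The upgrade, rollback, and delete actions described above for the web console have direct CLI equivalents in the Helm binary installed earlier. The following is a brief sketch using the release created in the earlier HashiCorp Vault example; the release name example-vault, the chart reference, and the revision number are assumptions, so substitute your own values:
# Upgrade the release to a newer chart version or changed values
helm upgrade example-vault openshift-helm-charts/hashicorp-vault
# Review the revision history created by installs and upgrades
helm history example-vault
# Roll back to an earlier revision, for example revision 1
helm rollback example-vault 1
# Delete the release and the resources it created
helm uninstall example-vault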
|
[
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/working-with-helm-charts
|
Chapter 49. RbacService
|
Chapter 49. RbacService 49.1. ListRoleBindings GET /v1/rbac/bindings 49.1.1. Description 49.1.2. Parameters 49.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 49.1.3. Return Type V1ListRoleBindingsResponse 49.1.4. Content Type application/json 49.1.5. Responses Table 49.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListRoleBindingsResponse 0 An unexpected error response. RuntimeError 49.1.6. Samples 49.1.7. Common object reference 49.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 49.1.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 49.1.7.3. StorageK8sRoleBinding Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean ClusterRole specifies whether the binding binds a cluster role. However, it cannot be used to determine whether the binding is a cluster role binding. This can be done in conjunction with the namespace. If the namespace is empty and cluster role is true, the binding is a cluster role binding. labels Map of string annotations Map of string createdAt Date date-time subjects List of StorageSubject roleId String 49.1.7.4. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.1.7.5. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.1.7.6. V1ListRoleBindingsResponse Field Name Required Nullable Type Description Format bindings List of StorageK8sRoleBinding 49.2. GetRoleBinding GET /v1/rbac/bindings/{id} 49.2.1. Description 49.2.2. Parameters 49.2.2.1. Path Parameters Name Description Required Default Pattern id X null 49.2.3. Return Type V1GetRoleBindingResponse 49.2.4. Content Type application/json 49.2.5. Responses Table 49.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetRoleBindingResponse 0 An unexpected error response. RuntimeError 49.2.6. Samples 49.2.7. Common object reference 49.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 49.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 49.2.7.3. StorageK8sRoleBinding Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean ClusterRole specifies whether the binding binds a cluster role. However, it cannot be used to determine whether the binding is a cluster role binding. This can be done in conjunction with the namespace. If the namespace is empty and cluster role is true, the binding is a cluster role binding. labels Map of string annotations Map of string createdAt Date date-time subjects List of StorageSubject roleId String 49.2.7.4. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.2.7.5. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.2.7.6. V1GetRoleBindingResponse Field Name Required Nullable Type Description Format binding StorageK8sRoleBinding 49.3. ListRoles GET /v1/rbac/roles 49.3.1. Description 49.3.2. Parameters 49.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 49.3.3. Return Type V1ListRolesResponse 49.3.4. Content Type application/json 49.3.5. Responses Table 49.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListRolesResponse 0 An unexpected error response. RuntimeError 49.3.6. Samples 49.3.7. Common object reference 49.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 49.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 49.3.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.3.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.3.7.5. V1ListRolesResponse Field Name Required Nullable Type Description Format roles List of StorageK8sRole 49.4. GetRole GET /v1/rbac/roles/{id} 49.4.1. Description 49.4.2. Parameters 49.4.2.1. Path Parameters Name Description Required Default Pattern id X null 49.4.3. Return Type V1GetRoleResponse 49.4.4. Content Type application/json 49.4.5. Responses Table 49.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetRoleResponse 0 An unexpected error response. RuntimeError 49.4.6. Samples 49.4.7. Common object reference 49.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 49.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 49.4.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.4.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.4.7.5. V1GetRoleResponse Field Name Required Nullable Type Description Format role StorageK8sRole 49.5. GetSubject GET /v1/rbac/subject/{id} Subjects served from this API are Groups and Users only. Id in this case is the Name field, since for users and groups, that is unique, and subjects do not have IDs. 49.5.1. Description 49.5.2. Parameters 49.5.2.1. Path Parameters Name Description Required Default Pattern id X null 49.5.3. Return Type V1GetSubjectResponse 49.5.4. Content Type application/json 49.5.5. Responses Table 49.5. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1GetSubjectResponse 0 An unexpected error response. RuntimeError 49.5.6. Samples 49.5.7. Common object reference 49.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 49.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 49.5.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.5.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.5.7.5. 
StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.5.7.6. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.5.7.7. V1GetSubjectResponse Field Name Required Nullable Type Description Format subject StorageSubject clusterRoles List of StorageK8sRole scopedRoles List of V1ScopedRoles 49.5.7.8. V1ScopedRoles Field Name Required Nullable Type Description Format namespace String roles List of StorageK8sRole 49.6. ListSubjects GET /v1/rbac/subjects 49.6.1. Description 49.6.2. Parameters 49.6.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 49.6.3. Return Type V1ListSubjectsResponse 49.6.4. Content Type application/json 49.6.5. Responses Table 49.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListSubjectsResponse 0 An unexpected error response. RuntimeError 49.6.6. Samples 49.6.7. Common object reference 49.6.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.6.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 49.6.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 49.6.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.6.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.6.7.5. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.6.7.6. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.6.7.7. V1ListSubjectsResponse Field Name Required Nullable Type Description Format subjectAndRoles List of V1SubjectAndRoles 49.6.7.8. V1SubjectAndRoles Field Name Required Nullable Type Description Format subject StorageSubject roles List of StorageK8sRole
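The endpoints above can be exercised directly with any HTTP client. The following curl invocations are a minimal sketch and are not taken from the product documentation: the Central hostname central.example.com, the ROX_API_TOKEN environment variable holding an API token, and the subject name system:authenticated are placeholder assumptions; the paths and the pagination.limit query parameter come from the tables above.

# List users and groups together with their cluster-wide and namespace-scoped roles,
# returning at most 10 entries (placeholder host and token).
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/rbac/subjects?pagination.limit=10"

# Fetch a single subject by name; for users and groups the name acts as the ID.
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/rbac/subject/system:authenticated"

Each call returns JSON shaped like V1ListSubjectsResponse and V1GetSubjectResponse, respectively.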
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s RoleBinding or ClusterRoleBinding. ////////////////////////////////////////",
"Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////",
"A list of k8s role bindings (free of scoped information) Next Tag: 2",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s RoleBinding or ClusterRoleBinding. ////////////////////////////////////////",
"Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////",
"Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////",
"A list of k8s roles (free of scoped information) Next Tag: 2",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////",
"Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////",
"Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////",
"Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////",
"Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////",
"Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////",
"A list of k8s subjects (users and groups only, for service accounts, try the service account service) Next Tag: 2"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/rbacservice
|
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1]
|
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) Groups is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 3.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 3.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a LocalSubjectAccessReview Table 3.2. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 3.3. HTTP responses HTTP code Response body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty
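A review is performed by creating a LocalSubjectAccessReview object against the namespaced endpoint above; the server evaluates it and returns the result rather than persisting the object. The following is a minimal sketch, assuming a namespace named my-project, a user named alice, and a check for the get verb on pods (all placeholders); the field names follow the property table above.

# Placeholder namespace, user, and resource; -o yaml prints the server's response.
cat <<EOF | oc create -f - -n my-project -o yaml
apiVersion: authorization.openshift.io/v1
kind: LocalSubjectAccessReview
namespace: my-project
verb: get
resourceAPIGroup: ""
resourceAPIVersion: "v1"
resource: pods
resourceName: ""
path: ""
isNonResourceURL: false
user: alice
groups: []
scopes: []
EOF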
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/localsubjectaccessreview-authorization-openshift-io-v1
|
Chapter 10. Hardware Enablement
|
Chapter 10. Hardware Enablement Hardware utility tools now correctly identify recently released hardware Prior to this update, obsolete ID files caused recently released hardware connected to a computer to be reported as unknown. To fix this bug, PCI, USB, and vendor device identification files have been updated. As a result, hardware utility tools now correctly identify recently released hardware. (BZ#1386133) New Wacom driver introduced in 7.4 to support upcoming tablets With this update, a new Wacom driver was introduced to support recently released and upcoming tablets, while the current driver continues to support previously released tablets. Notable features: the Wacom 27QHT (touch) is now supported, and the ExpressKey Remote is now supported. (BZ#1385026) Wacom kernel driver now supports ThinkPad X1 Yoga touch screen With this update, support for the ThinkPad X1 Yoga touch screen has been added to the Wacom kernel driver. As a result, the touch screen can be properly used when running Red Hat Enterprise Linux 7 on these machines. (BZ#1388646) The touch functionality has been added to the Wacom Cintiq 27 QHDT tablets This update adds support for the touch functionality of the Wacom Cintiq 27 QHDT tablets, which makes it possible to properly use the touch screen when running Red Hat Enterprise Linux 7 on these machines. (BZ#1391668) AMDGPU now supports the Southern Islands, Sea Islands, Volcanic Islands and Arctic Islands chipsets Support for the Southern Islands, Sea Islands, Volcanic Islands and Arctic Islands chipsets has been added. The AMDGPU graphics driver is the next generation family of open source graphics drivers for the latest AMD/ATI Radeon graphics cards. It is based on the Southern Islands, Sea Islands, Volcanic Islands, and Arctic Islands chipsets. It is necessary to install the proper firmware or microcode for the card, which is provided by the linux-firmware package. (BZ#1385757) Support added for AMD mobile graphics Support for AMD mobile graphics based on the Polaris architecture has been added. The Polaris architecture is based on the Arctic Islands chipsets. It is necessary to install the proper firmware or microcode for the card, which is provided by the linux-firmware package. (BZ#1339127) Netronome NFP devices are supported With this update, the nfp driver has been added to the Linux kernel. As a result, Netronome Network Flow Processor (Netronome NFP 4000/6000 VF) devices are now supported on Red Hat Enterprise Linux 7. (BZ#1377767) nvme-cli rebased to version 1.3 The nvme-cli utility has been updated to version 1.3, which includes support for Nonvolatile Memory Express (NVMe). With the support for NVMe, you can discover targets over Remote Direct Memory Access (RDMA) and connect to these targets. (BZ#1382119) Queued spinlocks have been implemented in the Linux kernel This update changes the spinlock implementation in the kernel from ticket spinlocks to queued spinlocks on AMD64 and Intel 64 architectures. Queued spinlocks are more scalable than ticket spinlocks. As a result, system performance has been improved, especially on Symmetric Multi Processing (SMP) systems with a large number of CPUs. Performance now increases more linearly with an increasing number of CPUs. Note that because of this change in the spinlock implementation, kernel modules built on Red Hat Enterprise Linux 7 might not be loadable on kernels from earlier releases.
Kernel modules released in Red Hat Enterprise Linux (RHEL) versions earlier than 7.4 are loadable on the kernel released in RHEL 7.4. (BZ#1241990) rapl now supports Intel Xeon v2 servers The Intel rapl driver has been updated to support Intel Xeon v2 servers. (BZ#1379590) Further support for Intel Platform Controller Hub [PCH] devices The kernel has been updated to enable support for new Intel PCH hardware on the Intel Xeon Processor E3 v6 Family CPUs. (BZ#1391219) Included genwqe-tools to enable use of hardware-accelerated zLib on IBM Power and s390x The genwqe-tools package allows users of IBM Power and s390x hardware to utilize FPGA-based PCIe cards for zLib compression and decompression processes. These tools enable use of RFC1950, RFC1951 and RFC1952 compliant hardware to increase performance. (BZ#1275663) librtas rebased to version 2.0.1 The librtas packages have been upgraded to upstream version 2.0.1, which provides a number of bug fixes and enhancements over the previous version. Notably, this update changes the soname of the provided libraries: librtas.so.1 changes to librtas.so.2, and librtasevent.so.1 changes to librtasevent.so.2. (BZ#1380656) The NFP driver The Network Flow Processor (NFP) driver has been backported from version 4.11 of the Linux kernel. This driver supports Netronome NFP4000 and NFP6000 based cards working as an advanced Ethernet NIC. The driver works with both SR-IOV physical and virtual functions. (BZ#1406197) Enable latest NVIDIA cards in Nouveau This update includes enablement code to ensure that higher-end NVIDIA cards based on the Pascal platform work correctly. (BZ#1330457) Support for Wacom ExpressKey Remote The Wacom ExpressKey Remote (EKR) is now supported in Red Hat Enterprise Linux 7. EKR is an external device that allows you to access shortcuts, menus, and commands. (BZ#1346348) Wacom Cintiq 27 QHD now supports ExpressKey Remote With this update, Wacom Cintiq 27 QHD tablets support the ExpressKey Remote (EKR). EKR is an external device that allows you to access shortcuts, menus, and commands. (BZ#1342990)
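For the nvme-cli update above, target discovery and connection over RDMA are done with the discover and connect subcommands. The commands below are a minimal sketch and are not part of the release note: the target address 192.0.2.10, port 4420, and subsystem NQN nqn.2014-08.com.example:nvme:target1 are placeholder values.

# Ask the discovery controller on the target which subsystems it exports (placeholder address/port).
nvme discover -t rdma -a 192.0.2.10 -s 4420

# Connect to one of the discovered subsystems by its NQN (placeholder NQN).
nvme connect -t rdma -n nqn.2014-08.com.example:nvme:target1 -a 192.0.2.10 -s 4420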
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_hardware_enablement
|
Building applications
|
Building applications OpenShift Container Platform 4.17 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc project <project_name> 1",
"oc status",
"oc delete project <project_name> 1",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].",
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"image.openshift.io/v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"oc get templates -n openshift",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod my-application",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms",
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge",
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/building_applications/index
|
Chapter 6. Updating Drivers During Installation on Intel and AMD Systems
|
Chapter 6. Updating Drivers During Installation on Intel and AMD Systems In most cases, Red Hat Enterprise Linux already includes drivers for the devices that make up your system. However, if your system contains hardware that has been released very recently, drivers for this hardware might not yet be included. Sometimes, a driver update that provides support for a new device might be available from Red Hat or your hardware vendor on a driver disc that contains rpm packages . Typically, the driver disc is available for download as an ISO image file . Often, you do not need the new hardware during the installation process. For example, if you use a DVD to install to a local hard drive, the installation will succeed even if drivers for your network card are not available. In situations like this, complete the installation and add support for the piece of hardware afterward - refer to Section 35.1.1, "Driver Update rpm Packages" for details of adding this support. In other situations, you might want to add drivers for a device during the installation process to support a particular configuration. For example, you might want to install drivers for a network device or a storage adapter card to give the installer access to the storage devices that your system uses. You can use a driver disc to add this support during installation in one of two ways: place the ISO image file of the driver disc in a location accessible to the installer: on a local hard drive a USB flash drive create a driver disc by extracting the image file onto: a CD a DVD Refer to the instructions for making installation discs in Section 2.1, "Making an Installation DVD" for more information on burning ISO image files to CD or DVD. If Red Hat, your hardware vendor, or a trusted third party told you that you will require a driver update during the installation process, choose a method to supply the update from the methods described in this chapter and test it before beginning the installation. Conversely, do not perform a driver update during installation unless you are certain that your system requires it. Although installing an unnecessary driver update will not cause harm, the presence of a driver on a system for which it was not intended can complicate support. 6.1. Limitations of Driver Updates During Installation Unfortunately, some situations persist in which you cannot use a driver update to provide drivers during installation: Devices already in use You cannot use a driver update to replace drivers that the installation program has already loaded. Instead, you must complete the installation with the drivers that the installation program loaded and update to the new drivers after installation, or, if you need the new drivers for the installation process, consider performing an initial RAM disk driver update - refer to Section 6.2.3, "Preparing an Initial RAM Disk Update" . Devices with an equivalent device available Because all devices of the same type are initialized together, you cannot update drivers for a device if the installation program has loaded drivers for a similar device. For example, consider a system that has two different network adapters, one of which has a driver update available. The installation program will initialize both adapters at the same time, and therefore, you will not be able to use this driver update. Again, complete the installation with the drivers loaded by the installation program and update to the new drivers after installation, or use an initial RAM disk driver update.
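As a rough illustration of the USB flash drive option mentioned above, the driver disc ISO image can be copied onto a USB stick from an existing Linux system before starting the installation. This is only a sketch: the image name driver-update.iso and the device name /dev/sdb are assumptions, and dd overwrites the entire target device, so confirm the device name first.
# Assumptions: the driver disc image is driver-update.iso and the USB stick is /dev/sdb.
# dd overwrites the whole target device -- verify the device name with lsblk or fdisk -l first.
lsblk
dd if=driver-update.iso of=/dev/sdb bs=4M conv=fsync
sync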
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/chap-Updating_drivers_during_installation_on_Intel_and_AMD_systems
|
Chapter 6. Examples
|
Chapter 6. Examples This chapter demonstrates the use of Red Hat build of Apache Qpid JMS through example programs. For more examples, see the Qpid JMS example suite and the Qpid JMS examples . 6.1. Configuring the JNDI context Applications using JMS typically use JNDI to obtain the ConnectionFactory and Destination objects used by the application. This keeps the configuration separate from the program and insulates it from the particular client implementation. For the purpose of using these examples, a file named jndi.properties should be placed on the classpath to configure the JNDI context, as detailed previously . The contents of the jndi.properties file should match what is shown below, which establishes that the client's InitialContextFactory implementation should be used, configures a ConnectionFactory to connect to a local server, and defines a destination queue named queue . 6.2. Sending messages This example first creates a JNDI Context , uses it to look up a ConnectionFactory and Destination , creates and starts a Connection using the factory, and then creates a Session . Then a MessageProducer is created to the Destination , and a message is sent using it. The Connection is then closed, and the program exits. A runnable variant of this Sender example is in the <source-dir>/qpid-jms-examples directory, along with the Hello World example covered previously in Chapter 3, Getting started . Example: Sending messages package org.jboss.amq.example; import jakarta.jms.Connection; import jakarta.jms.ConnectionFactory; import jakarta.jms.DeliveryMode; import jakarta.jms.Destination; import jakarta.jms.ExceptionListener; import jakarta.jms.JMSException; import jakarta.jms.Message; import jakarta.jms.MessageProducer; import jakarta.jms.Session; import jakarta.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Sender { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup"); Destination destination = (Destination) context.lookup("myDestinationLookup"); 2 Connection connection = factory.createConnection("<username>", "<password>"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageProducer messageProducer = session.createProducer(destination); 5 TextMessage message = session.createTextMessage("Message Text!"); 6 messageProducer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE); 7 connection.close(); 8 } catch (Exception exp) { System.out.println("Caught exception, exiting."); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println("Connection ExceptionListener fired, exiting."); exception.printStackTrace(System.out); System.exit(1); } } } 1 Creates the JNDI Context to look up ConnectionFactory and Destination objects. The configuration is picked up from the jndi.properties file as detailed earlier . 2 The ConnectionFactory and Destination objects are retrieved from the JNDI Context using their lookup names. 3 The factory is used to create the Connection , which then has an ExceptionListener registered and is then started. 
The credentials given when creating the connection will typically be taken from an appropriate external configuration source, ensuring they remain separate from the application itself and can be updated independently. 4 A non-transacted, auto-acknowledge Session is created on the Connection . 5 The MessageProducer is created to send messages to the Destination . 6 A TextMessage is created with the given content. 7 The TextMessage is sent. It is sent non-persistent, with default priority and no expiration. 8 The Connection is closed. The Session and MessageProducer are closed implicitly. Note that this is only an example. A real-world application would typically use a long-lived MessageProducer and send many messages using it over time. Opening and then closing a Connection , Session , and MessageProducer per message is generally not efficient. 6.3. Receiving messages This example starts by creating a JNDI Context, using it to look up a ConnectionFactory and Destination , creating and starting a Connection using the factory, and then creates a Session . Then a MessageConsumer is created for the Destination , a message is received using it, and its contents are printed to the console. The Connection is then closed and the program exits. The same JNDI configuration is used as in the sending example . An executable variant of this Receiver example is contained within the examples directory of the client distribution, along with the Hello World example covered previously in Chapter 3, Getting started . Example: Receiving messages package org.jboss.amq.example; import jakarta.jms.Connection; import jakarta.jms.ConnectionFactory; import jakarta.jms.Destination; import jakarta.jms.ExceptionListener; import jakarta.jms.JMSException; import jakarta.jms.Message; import jakarta.jms.MessageConsumer; import jakarta.jms.Session; import jakarta.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Receiver { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup"); Destination destination = (Destination) context.lookup("myDestinationLookup"); 2 Connection connection = factory.createConnection("<username>", "<password>"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageConsumer messageConsumer = session.createConsumer(destination); 5 Message message = messageConsumer.receive(5000); 6 if (message == null) { 7 System.out.println("A message was not received within given time."); } else { System.out.println("Received message: " + ((TextMessage) message).getText()); } connection.close(); 8 } catch (Exception exp) { System.out.println("Caught exception, exiting."); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println("Connection ExceptionListener fired, exiting."); exception.printStackTrace(System.out); System.exit(1); } } } 1 Creates the JNDI Context to look up ConnectionFactory and Destination objects. The configuration is picked up from the jndi.properties file as detailed earlier . 2 The ConnectionFactory and Destination objects are retrieved from the JNDI Context using their lookup names. 
3 The factory is used to create the Connection , which then has an ExceptionListener registered and is then started. The credentials given when creating the connection will typically be taken from an appropriate external configuration source, ensuring they remain separate from the application itself and can be updated independently. 4 A non-transacted, auto-acknowledge Session is created on the Connection . 5 The MessageConsumer is created to receive messages from the Destination . 6 A call to receive a message is made with a five second timeout. 7 The result is checked, and if a message was received, its contents are printed, or notice that no message was received. The result is cast explicitly to TextMessage as this is what we know the Sender sent. 8 The Connection is closed. The Session and MessageConsumer are closed implicitly. Note that this is only an example. A real-world application would typically use a long-lived MessageConsumer and receive many messages using it over time. Opening and then closing a Connection , Session , and MessageConsumer for each message is generally not efficient.
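For completeness, here is a hedged sketch of how the Sender and Receiver examples might be compiled and run from the examples directory with Maven. The exact Maven goals and the target/ layout are assumptions rather than steps taken from this guide; only the package and class names come from the listings above.
# Assumption: run from within the qpid-jms-examples directory of the client distribution,
# with the jndi.properties file described earlier on the classpath.
mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests
# Run the receiver first, then the sender, each in its own terminal (class names from the listings above).
java -cp "target/classes:target/dependency/*" org.jboss.amq.example.Receiver
java -cp "target/classes:target/dependency/*" org.jboss.amq.example.Sender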
|
[
"Configure the InitialContextFactory class to use java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory Configure the ConnectionFactory connectionfactory.myFactoryLookup = amqp://localhost:5672 Configure the destination queue.myDestinationLookup = queue",
"package org.jboss.amq.example; import jakarta.jms.Connection; import jakarta.jms.ConnectionFactory; import jakarta.jms.DeliveryMode; import jakarta.jms.Destination; import jakarta.jms.ExceptionListener; import jakarta.jms.JMSException; import jakarta.jms.Message; import jakarta.jms.MessageProducer; import jakarta.jms.Session; import jakarta.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Sender { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup(\"myFactoryLookup\"); Destination destination = (Destination) context.lookup(\"myDestinationLookup\"); 2 Connection connection = factory.createConnection(\"<username>\", \"<password>\"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageProducer messageProducer = session.createProducer(destination); 5 TextMessage message = session.createTextMessage(\"Message Text!\"); 6 messageProducer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE); 7 connection.close(); 8 } catch (Exception exp) { System.out.println(\"Caught exception, exiting.\"); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println(\"Connection ExceptionListener fired, exiting.\"); exception.printStackTrace(System.out); System.exit(1); } } }",
"package org.jboss.amq.example; import jakarta.jms.Connection; import jakarta.jms.ConnectionFactory; import jakarta.jms.Destination; import jakarta.jms.ExceptionListener; import jakarta.jms.JMSException; import jakarta.jms.Message; import jakarta.jms.MessageConsumer; import jakarta.jms.Session; import jakarta.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Receiver { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup(\"myFactoryLookup\"); Destination destination = (Destination) context.lookup(\"myDestinationLookup\"); 2 Connection connection = factory.createConnection(\"<username>\", \"<password>\"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageConsumer messageConsumer = session.createConsumer(destination); 5 Message message = messageConsumer.receive(5000); 6 if (message == null) { 7 System.out.println(\"A message was not received within given time.\"); } else { System.out.println(\"Received message: \" + ((TextMessage) message).getText()); } connection.close(); 8 } catch (Exception exp) { System.out.println(\"Caught exception, exiting.\"); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println(\"Connection ExceptionListener fired, exiting.\"); exception.printStackTrace(System.out); System.exit(1); } } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/examples
|
Chapter 37. Log
|
Chapter 37. Log Only producer is supported The Log component logs message exchanges to the underlying logging mechanism. Camel uses SLF4J which allows you to configure logging via, among others: Log4j Logback Java Util Logging 37.1. URI format Where loggingCategory is the name of the logging category to use. You can append query options to the URI in the following format, ?option=value&option=value&... Note Using Logger instance from the Registry If there's single instance of org.slf4j.Logger found in the Registry, the loggingCategory is no longer used to create logger instance. The registered instance is used instead. Also it is possible to reference particular Logger instance using ?logger=#myLogger URI parameter. Eventually, if there's no registered and URI logger parameter, the logger instance is created using loggingCategory . For example, a log endpoint typically specifies the logging level using the level option, as follows: The default logger logs every exchange ( regular logging ). But Camel also ships with the Throughput logger, which is used whenever the groupSize option is specified. Note Also a log in the DSL There is also a log directly in the DSL, but it has a different purpose. Its meant for lightweight and human logs. See more details at LogEIP. 37.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 37.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 37.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 37.3. Component Options The Log component supports 3 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean exchangeFormatter (advanced) Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter 37.4. Endpoint Options The Log endpoint is configured using URI syntax: with the following path and query parameters: 37.4.1. Path Parameters (1 parameters) Name Description Default Type loggerName (producer) Required Name of the logging category to use. String 37.4.2. Query Parameters (27 parameters) Name Description Default Type groupActiveOnly (producer) If true, will hide stats when no new messages have been received for a time interval, if false, show stats regardless of message traffic. true Boolean groupDelay (producer) Set the initial delay for stats (in millis). Long groupInterval (producer) If specified will group message stats by this time interval (in millis). Long groupSize (producer) An integer that specifies a group size for throughput logging. Integer lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean level (producer) Logging level to use. The default value is INFO. Enum values: TRACE DEBUG INFO WARN ERROR OFF INFO String logMask (producer) If true, mask sensitive information like password or passphrase in the log. Boolean marker (producer) An optional Marker name to use. String exchangeFormatter (advanced) To use a custom exchange formatter. ExchangeFormatter maxChars (formatting) Limits the number of characters logged per line. 10000 int multiline (formatting) If enabled then each information is outputted on a newline. false boolean showAll (formatting) Quick option for turning all options on. (multiline, maxChars has to be manually set if to be used). false boolean showAllProperties (formatting) Show all of the exchange properties (both internal and custom). false boolean showBody (formatting) Show the message body. true boolean showBodyType (formatting) Show the body Java type. true boolean showCaughtException (formatting) If the exchange has a caught exception, show the exception message (no stack trace). A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION_CAUGHT) and for instance a doCatch can catch exceptions. false boolean showException (formatting) If the exchange has an exception, show the exception message (no stacktrace). false boolean showExchangeId (formatting) Show the unique exchange ID. 
false boolean showExchangePattern (formatting) Shows the Message Exchange Pattern (or MEP for short). true boolean showFiles (formatting) If enabled Camel will output files. false boolean showFuture (formatting) If enabled Camel will on Future objects wait for it to complete to obtain the payload to be logged. false boolean showHeaders (formatting) Show the message headers. false boolean showProperties (formatting) Show the exchange properties (only custom). Use showAllProperties to show both internal and custom properties. false boolean showStackTrace (formatting) Show the stack trace, if an exchange has an exception. Only effective if one of showAll, showException or showCaughtException are enabled. false boolean showStreams (formatting) Whether Camel should show stream bodies or not (eg such as java.io.InputStream). Beware if you enable this option then you may not be able later to access the message body as the stream have already been read by this logger. To remedy this you will have to use Stream Caching. false boolean skipBodyLineSeparator (formatting) Whether to skip line separators when logging the message body. This allows to log the message body in one line, setting this option to false will preserve any line separators from the body, which then will log the body as is. true boolean style (formatting) Sets the outputs style to use. Enum values: Default Tab Fixed Default OutputStyle 37.5. Regular logger sample In the route below we log the incoming orders at DEBUG level before the order is processed: from("activemq:orders").to("log:com.mycompany.order?level=DEBUG").to("bean:processOrder"); Or using Spring XML to define the route: <route> <from uri="activemq:orders"/> <to uri="log:com.mycompany.order?level=DEBUG"/> <to uri="bean:processOrder"/> </route> 37.6. Regular logger with formatter sample In the route below we log the incoming orders at INFO level before the order is processed. from("activemq:orders"). to("log:com.mycompany.order?showAll=true&multiline=true").to("bean:processOrder"); 37.7. Throughput logger with groupSize sample In the route below we log the throughput of the incoming orders at DEBUG level grouped by 10 messages. from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupSize=10").to("bean:processOrder"); 37.8. Throughput logger with groupInterval sample This route will result in message stats logged every 10s, with an initial 60s delay and stats should be displayed even if there isn't any message traffic. from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false").to("bean:processOrder"); The following will be logged: 37.9. Masking sensitive information like password You can enable security masking for logging by setting logMask flag to true . Note that this option also affects Log EIP. To enable mask in Java DSL at CamelContext level: camelContext.setLogMask(true); And in XML: <camelContext logMask="true"> You can also turn it on|off at endpoint level. To enable mask in Java DSL at endpoint level, add logMask=true option in the URI for the log endpoint: from("direct:start").to("log:foo?logMask=true"); And in XML: <route> <from uri="direct:foo"/> <to uri="log:foo?logMask=true"/> </route> org.apache.camel.support.processor.DefaultMaskingFormatter is used for the masking by default. If you want to use a custom masking formatter, put it into registry with the name CamelCustomLogMask . Note that the masking formatter must implement org.apache.camel.spi.MaskingFormatter . 37.10. 
Full customization of the logging output With the options outlined in the section, you can control much of the output of the logger. However, log lines will always follow this structure: This format is unsuitable in some cases, perhaps because you need to... Filter the headers and properties that are printed, to strike a balance between insight and verbosity. Adjust the log message to whatever you deem most readable. Tailor log messages for digestion by log mining systems, e.g. Splunk. Print specific body types differently. Whenever you require absolute customization, you can create a class that implements the interface. Within the format(Exchange) method you have access to the full Exchange, so you can select and extract the precise information you need, format it in a custom manner and return it. The return value will become the final log message. You can have the Log component pick up your custom ExchangeFormatter in either of two ways: Explicitly instantiating the LogComponent in your Registry: <bean name="log" class="org.apache.camel.component.log.LogComponent"> <property name="exchangeFormatter" ref="myCustomFormatter" /> </bean> 37.10.1. Convention over configuration Simply by registering a bean with the name logFormatter ; the Log Component is intelligent enough to pick it up automatically. <bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" /> Note The ExchangeFormatter gets applied to all Log endpoints within that Camel Context . If you need different ExchangeFormatters for different endpoints, just instantiate the LogComponent as many times as needed, and use the relevant bean name as the endpoint prefix. When using a custom log formatter, you can specify parameters in the log uri, which gets configured on the custom log formatter. Though when you do that you should define the "logFormatter" as prototype scoped so its not shared if you have different parameters, for example, <bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" scope="prototype"/> And then we can have Camel routes using the log uri with different options: <to uri="log:foo?param1=foo&param2=100"/> <to uri="log:bar?param1=bar&param2=200"/> 37.11. Spring Boot Auto-Configuration When using log with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-log-starter</artifactId> </dependency> The component supports 4 options, which are listed below. Name Description Default Type camel.component.log.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.log.enabled Whether to enable auto configuration of the log component. This is enabled by default. Boolean camel.component.log.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.log.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
|
[
"log:loggingCategory[?options]",
"log:org.apache.camel.example?level=DEBUG",
"log:loggerName",
"from(\"activemq:orders\").to(\"log:com.mycompany.order?level=DEBUG\").to(\"bean:processOrder\");",
"<route> <from uri=\"activemq:orders\"/> <to uri=\"log:com.mycompany.order?level=DEBUG\"/> <to uri=\"bean:processOrder\"/> </route>",
"from(\"activemq:orders\"). to(\"log:com.mycompany.order?showAll=true&multiline=true\").to(\"bean:processOrder\");",
"from(\"activemq:orders\"). to(\"log:com.mycompany.order?level=DEBUG&groupSize=10\").to(\"bean:processOrder\");",
"from(\"activemq:orders\"). to(\"log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false\").to(\"bean:processOrder\");",
"\"Received: 1000 new messages, with total 2000 so far. Last group took: 10000 millis which is: 100 messages per second. average: 100\"",
"camelContext.setLogMask(true);",
"<camelContext logMask=\"true\">",
"from(\"direct:start\").to(\"log:foo?logMask=true\");",
"<route> <from uri=\"direct:foo\"/> <to uri=\"log:foo?logMask=true\"/> </route>",
"Exchange[Id:ID-machine-local-50656-1234567901234-1-2, ExchangePattern:InOut, Properties:{CamelToEndpoint=log://org.apache.camel.component.log.TEST?showAll=true, CamelCreatedTimestamp=Thu Mar 28 00:00:00 WET 2013}, Headers:{breadcrumbId=ID-machine-local-50656-1234567901234-1-1}, BodyType:String, Body:Hello World, Out: null]",
"<bean name=\"log\" class=\"org.apache.camel.component.log.LogComponent\"> <property name=\"exchangeFormatter\" ref=\"myCustomFormatter\" /> </bean>",
"<bean name=\"logFormatter\" class=\"com.xyz.MyCustomExchangeFormatter\" />",
"<bean name=\"logFormatter\" class=\"com.xyz.MyCustomExchangeFormatter\" scope=\"prototype\"/>",
"<to uri=\"log:foo?param1=foo&param2=100\"/> <to uri=\"log:bar?param1=bar&param2=200\"/>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-log-starter</artifactId> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-log-component-starter
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_devices/making-open-source-more-inclusive
|
B.20. fence-agents
|
B.20. fence-agents B.20.1. RHBA-2011:0363 - fence-agents bug fix update An updated fence-agents package that fixes a bug is now available for Red Hat Enterprise Linux 6. Red Hat fence agents are a collection of scripts to handle remote power management for several devices. They allow failed or unreachable nodes to be forcibly restarted and removed from the cluster. Bug Fix BZ# 680522 A previous advisory, the RHEA-2010:0904 enhancement update, stated that the Brocade 200E, Brocade 300, Brocade 4100, Brocade 4900, and Brocade 5100 fencing devices are now supported by the fence_brocade agent. However, the fence_brocade agent was not included in the updated package. This update corrects this error, and the fence_brocade agent is now included in the package as expected. All users of fence-agents are advised to upgrade to this updated package, which resolves this issue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/fence-agents
|
Installing on AWS
|
Installing on AWS OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Amazon Web Services Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_aws/index
|
Chapter 4. Extending Red Hat Software Collections
|
Chapter 4. Extending Red Hat Software Collections This chapter describes extending some of the Software Collections that are part of the Red Hat Software Collections offering. 4.1. Providing an scldevel Subpackage The purpose of an scldevel subpackage is to make the process of creating dependent Software Collections easier by providing a number of generic macro files. Packagers then use these macro files when they are extending existing Software Collections. scldevel is provided as a subpackage of your Software Collection's metapackage. 4.1.1. Creating an scldevel Subpackage The following section describes creating an scldevel subpackage for two examples of Ruby Software Collections, ruby193 and ruby200. Procedure 4.1. Providing your own scldevel subpackage In your Software Collection's metapackage, add the scldevel subpackage by defining its name, summary, and description: %package scldevel Summary: Package shipping development files for %scl Provides: scldevel(%{scl_name_base}) %description scldevel Package shipping development files, especially useful for development of packages depending on %scl Software Collection. You are advised to use the virtual Provides: scldevel(%{scl_name_base}) during the build of packages of dependent Software Collections. This will ensure availability of a version of the %{scl_name_base} Software Collection and its macros, as specified in the following step. In the %install section of your Software Collection's metapackage, create the macros.%{scl_name_base}-scldevel file that is part of the scldevel subpackage and contains: cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF Note that between all Software Collections that share the same %{scl_name_base} name, the provided macros.%{scl_name_base}-scldevel files must conflict. This is to disallow installing multiple versions of the %{scl_name_base} Software Collections. For example, the ruby193-scldevel subpackage cannot be installed when there is the ruby200-scldevel subpackage installed. 4.1.2. Using an scldevel Subpackage in a Dependent Software Collection To use your scldevel subpackage in a Software Collection that depends on the ruby200 Software Collection, update the metapackage of the dependent Software Collection as described below. Procedure 4.2. Using your own scldevel subpackage in a dependent Software Collection Consider adding the following at the beginning of the metapackage's spec file: %{!?scl_ruby:%global scl_ruby ruby200} %{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-} These two lines are optional. They are only meant as a visual hint that the dependent Software Collection has been designed to depend on the ruby200 Software Collection. If there is no other scldevel subpackage available in the build root, then the ruby200-scldevel subpackage is used as a build requirement. You can substitute these lines with the following line: %{?scl_prefix_ruby} Add the following build requirement to the metapackage: BuildRequires: %{scl_prefix_ruby}scldevel By specifying this build requirement, you ensure that the scldevel subpackage is in the build root and that the default values are not in use. Omitting this package could result in broken requires at the subsequent packages' build time. Ensure that the %package runtime part of the metapackage's spec file includes the following lines: %package runtime Summary: Package that handles %scl Software Collection. 
Requires: scl-utils Requires: %{scl_prefix_ruby}runtime Consider including the following lines in the %package build part of the metapackage's spec file: %package build Summary: Package shipping basic build configuration Requires: %{scl_prefix_ruby}scldevel Specifying Requires: %{scl_prefix_ruby}scldevel ensures that macros are available in all packages of the Software Collection. Note that adding this Requires only makes sense in specific use cases, such as where packages in a dependent Software Collection use macros provided by the scldevel subpackage.
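As a quick sanity check that the scldevel subpackage exports the virtual Provides described above, the built package can be queried directly. This sketch assumes the metapackage was built for the ruby200 Software Collection, whose %{scl_name_base} is ruby.
# Assumption: the built subpackage RPM is in the current directory.
rpm -qp --provides ruby200-scldevel-*.rpm
# The output is expected to include the virtual provide scldevel(ruby).
# After installation, the generic macro file created in the %install step can be inspected:
cat /etc/rpm/macros.ruby-scldevel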
|
[
"%package scldevel Summary: Package shipping development files for %scl Provides: scldevel(%{scl_name_base}) %description scldevel Package shipping development files, especially useful for development of packages depending on %scl Software Collection.",
"cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF",
"%{!?scl_ruby:%global scl_ruby ruby200} %{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-}",
"%{?scl_prefix_ruby}",
"BuildRequires: %{scl_prefix_ruby}scldevel",
"%package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils Requires: %{scl_prefix_ruby}runtime",
"%package build Summary: Package shipping basic build configuration Requires: %{scl_prefix_ruby}scldevel"
] |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/chap-extending_red_hat_software_collections
|
Chapter 4. Installing Red Hat build of OpenJDK with the MSI installer
|
Chapter 4. Installing Red Hat build of OpenJDK with the MSI installer This procedure describes how to install Red Hat build of OpenJDK 11 for Microsoft Windows using the MSI-based installer. Procedure Download the MSI-based installer of Red Hat build of OpenJDK 11 for Microsoft Windows. Run the installer for Red Hat build of OpenJDK 11 for Microsoft Windows. Click Next on the welcome screen. Check I accept the terms in license agreement , then click Next . Click Next . Accept the defaults or review the optional properties . Click Install . Click Yes on the Do you want to allow this app to make changes on your device? prompt. To verify that Red Hat build of OpenJDK 11 for Microsoft Windows is successfully installed, run the java -version command in the command prompt; you must get the following output:
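A few additional command prompt checks can help confirm which Java installation is being picked up; this is only a sketch, and whether JAVA_HOME is set depends on the optional properties chosen during installation.
java -version
where java
echo %JAVA_HOME%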
|
[
"openjdk version \"11.0.3-redhat\" 2019-04-16 LTS OpenJDK Runtime Environment 18.9 (build 11.0.3-redhat+7-LTS) OpenJDK 64-Bit Server VM 18.9 (build 11.0.3-redhat+7-LTS, mixed mode)"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_for_windows/installing_openjdk_msi_installer
|
Chapter 3. Configuring RBAC settings
|
Chapter 3. Configuring RBAC settings When you install Cryostat 3.0 by using either the Cryostat Operator or a Helm chart, Cryostat includes a reverse proxy ( openshift-oauth-proxy or oauth2_proxy ) in the pod. All API requests to Cryostat and all users of the Cryostat web console or Grafana dashboard are directed through this proxy, which handles client sessions to control access to the application. When deployed on Red Hat OpenShift, the proxy uses the Cryostat installation namespace to perform RBAC checks for user authentication and authorization by integrating with the Red Hat OpenShift cluster SSO provider. From Cryostat 3.0 onward, Cryostat applies the same role-based access control (RBAC) permission check to all users for the purpose of permitting or denying access to the product. By default, the required RBAC role in the Cryostat application's installation namespace is create pods/exec . Any Red Hat OpenShift user accounts that are assigned the required RBAC role have full access to the Cryostat web console and all Cryostat features. If a Red Hat OpenShift account does not have the required RBAC role, this user is blocked from accessing Cryostat. Note You can optionally configure the auth proxy with an htpasswd file to enable Basic authentication. On Red Hat OpenShift, this enables you to define additional user accounts that can access Cryostat beyond those with Red Hat OpenShift SSO RBAC access. When installing a Cryostat instance by using the Cryostat Operator, you can optionally use the .spec.authorizationOptions.openShiftSSO.accessReview field in the Cryostat custom resource (CR) to customize the required Red Hat OpenShift SSO RBAC permissions for accessing Cryostat. Prerequisites Logged in to the OpenShift Container Platform by using the Red Hat OpenShift web console. Procedure If you want to start creating a Cryostat instance, perform the following steps: On your Red Hat OpenShift web console, click Operators > Installed Operators . From the list of available Operators, select Red Hat build of Cryostat. On the Operator details page, click the Details tab. In the Provided APIs section, select Cryostat and then click Create instance . On the Create Cryostat panel, to customize the required SubjectAccessReview or TokenAccessReview for all client access to Cryostat, choose one of the following options: If you are using the Form view: Click the Form view radio button. Expand Advanced Configurations to open additional options. Expand the Authorization Options > OpenShift SSO > Access Review section of the Cryostat CR. Figure 3.1. Access Review properties for a Cryostat instance Use the following fields to specify any customized RBAC settings that are required for accessing Cryostat: Field Details group API group of the resource. A wildcard asterisk ( * ) value represents all groups. name Name of the resource being requested for a get or deleted for a delete . An empty value represents all names. namespace Namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces. Consider the following guidelines: An empty value is defaulted for LocalSubjectAccessReviews. An empty value represents no cluster-scoped resources. An empty value represents all namespace-scoped resources from a SubjectAccessReview or SelfSubjectAccessReview. resource An existing resource type. A wildcard asterisk ( * ) value represents all resource types. subresource An existing resource type. An empty value represents no resource types.
verb A Kubernetes resource API verb (for example, get , list , watch , create , update , delete , proxy ). A wildcard asterisk ( * ) value represents all verbs. version API version of the resource. A wildcard asterisk ( * ) value represents all versions. If you are using the YAML view: Click the YAML view radio button. From the spec: element, edit the authorizationOptions:OpenShiftSSO properties to match your RBAC permission requirements. Example configuration for RBAC permissions apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample namespace: cryostat-test spec: ... authorizationOptions: openShiftSSO: accessReview: group: <API group of resource> name: <Name of resource being requested or deleted> namespace: <Namespace of action being requested> resource: <An existing resource type> subresource: <An existing resource type> verb: <A Kubernetes resource API verb> version: <API version of resource> ... If you want to configure other properties in the custom resource (CR) for this Cryostat instance, see the other sections of this document for more information about these properties. If you want to finish creating this Cryostat instance, click Create . When you click Create , this Cryostat instance is available under the Cryostat tab on the Operator details page. You can subsequently edit the CR properties for a Cryostat instance by clicking the instance name on the Operator details page and then selecting Edit Cryostat from the Actions drop-down menu.
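A minimal shell sketch of how the default create pods/exec requirement described above can be checked and granted from the command line; the namespace cryostat-test, the user developer1, and the role name cryostat-access are illustrative placeholders rather than values defined by this chapter.
# Check whether a user already has the default permission (create on pods/exec)
# in the Cryostat installation namespace.
oc auth can-i create pods --subresource=exec -n cryostat-test --as developer1
# If the check returns "no", grant the permission with a namespaced role and binding.
oc create role cryostat-access --verb=create --resource=pods/exec -n cryostat-test
oc create rolebinding cryostat-access-developer1 --role=cryostat-access --user=developer1 -n cryostat-test
Users who pass this check get full access to the Cryostat web console; if you customize the accessReview settings in the CR, substitute the configured resource, verb, and namespace in the check.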
|
[
"apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample namespace: cryostat-test spec: authorizationOptions: openShiftSSO: accessReview: group: <API group of resource> name: <Name of resource being requested or deleted> namespace: <Namespace of action being requested> resource: <An existing resource type> subresource: <An existig resource type> verb: <A Kubernetes resource API verb> version: <API version of resource>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_the_red_hat_build_of_cryostat_operator_to_configure_cryostat/configuring-rbac-settings_assembly_cryostat-operator
|
Part II. Updating an integration
|
Part II. Updating an integration You can pause, resume, or remove your integrations in Red Hat Hybrid Cloud Console .
| null |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_microsoft_azure_data_into_cost_management/assembly-updating-int
|
Chapter 6. Expanding persistent volumes
|
Chapter 6. Expanding persistent volumes 6.1. Enabling volume expansion support Before you can expand persistent volumes, the StorageClass object must have the allowVolumeExpansion field set to true . Procedure Edit the StorageClass object and add the allowVolumeExpansion attribute by running the following command: USD oc edit storageclass <storage_class_name> 1 1 Specifies the name of the storage class. The following example demonstrates adding this line at the bottom of the storage class configuration. apiVersion: storage.k8s.io/v1 kind: StorageClass ... parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1 1 Setting this attribute to true allows PVCs to be expanded after creation. 6.2. Expanding CSI volumes You can use the Container Storage Interface (CSI) to expand storage volumes after they have already been created. OpenShift Container Platform supports CSI volume expansion by default. However, a specific CSI driver is required. OpenShift Container Platform 4.10 supports version 1.1.0 of the CSI specification . Important Expanding CSI volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.3. Expanding FlexVolume with a supported driver When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in OpenShift Container Platform. FlexVolume allows expansion if the driver is set with RequiresFSResize to true . The FlexVolume can be expanded on pod restart. Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod. Prerequisites The underlying volume driver supports resize. The driver is set with the RequiresFSResize capability to true . Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . Procedure To use resizing in the FlexVolume plugin, you must implement the ExpandableVolumePlugin interface using these methods: RequiresFSResize If true , updates the capacity directly. If false , calls the ExpandFS method to finish the filesystem resize. ExpandFS If true , calls ExpandFS to resize filesystem after physical volume expansion is done. The volume driver can also perform physical volume resize together with filesystem resize. Important Because OpenShift Container Platform does not support installation of FlexVolume plugins on control plane nodes, it does not support control-plane expansion of FlexVolume. 6.4. Expanding local volumes You can manually expand persistent volumes (PVs) and persistent volume claims (PVCs) created by using the local storage operator (LSO). Procedure Expand the underlying devices, and ensure that appropriate capacity is available on these devices. Update the corresponding PV objects to match the new device sizes by editing the .spec.capacity field of the PV. For the storage class that is used for binding the PVC to the PV, set allowVolumeExpansion:true . For the PVC, set .spec.resources.requests.storage to match the new size.
Kubelet should automatically expand the underlying file system on the volume, if necessary, and update the status field of the PVC to reflect the new size. 6.5. Expanding persistent volume claims (PVCs) with a file system Expanding PVCs based on volume types that need file system resizing, such as GCE PD, EBS, and Cinder, is a two-step process. This process involves expanding volume objects in the cloud provider, and then expanding the file system on the actual node. Expanding the file system on the node only happens when a new pod is started with the volume. Prerequisites The controlling StorageClass object must have allowVolumeExpansion set to true . Procedure Edit the PVC and request a new size by editing spec.resources.requests . For example, the following expands the ebs PVC to 8 Gi. kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: "storageClassWithFlagSet" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1 1 Updating spec.resources.requests to a larger amount will expand the PVC. After the cloud provider object has finished resizing, the PVC is set to FileSystemResizePending . Check the condition by entering the following command: USD oc describe pvc <pvc_name> When the cloud provider object has finished resizing, the PersistentVolume object reflects the newly requested size in PersistentVolume.Spec.Capacity . At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the FileSystemResizePending condition is removed from the PVC. 6.6. Recovering from failure when expanding volumes If expanding underlying storage fails, the OpenShift Container Platform administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention. Procedure Mark the persistent volume (PV) that is bound to the PVC with the Retain reclaim policy. This can be done by editing the PV and changing persistentVolumeReclaimPolicy to Retain . Delete the PVC. This will be recreated later. To ensure that the newly created PVC can bind to the PV marked Retain , manually edit the PV and delete the claimRef entry from the PV specs. This marks the PV as Available . Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider. Set the volumeName field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only. Restore the reclaim policy on the PV.
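As a rough illustration of the file system expansion flow in section 6.5, the following sketch patches the example ebs claim from the command line and watches for the FileSystemResizePending condition; the claim name and target size are taken from the example above, and the commands assume you are already in the correct project.
# Request a larger size on the PVC (equivalent to editing spec.resources.requests).
oc patch pvc ebs -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
# Watch the claim until the cloud provider has resized the volume; the
# FileSystemResizePending condition means a pod restart will finish the resize.
oc describe pvc ebs | grep -A2 Conditions
# After a pod that mounts the claim is restarted, confirm the new capacity.
oc get pvc ebs -o jsonpath='{.status.capacity.storage}{"\n"}'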
|
[
"oc edit storageclass <storage_class_name> 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/storage/expanding-persistent-volumes
|
Chapter 11. Replacing Controller nodes
|
Chapter 11. Replacing Controller nodes In certain circumstances a Controller node in a high availability cluster might fail. In these situations, you must remove the node from the cluster and replace it with a new Controller node. Complete the steps in this section to replace a Controller node. The Controller node replacement process involves running the openstack overcloud deploy command to update the overcloud with a request to replace a Controller node. Important The following procedure applies only to high availability environments. Do not use this procedure if you are using only one Controller node. 11.1. Preparing for Controller replacement Before you replace an overcloud Controller node, it is important to check the current state of your Red Hat OpenStack Platform environment. Checking the current state can help avoid complications during the Controller replacement process. Use the following list of preliminary checks to determine if it is safe to perform a Controller node replacement. Run all commands for these checks on the undercloud. Procedure Check the current status of the overcloud stack on the undercloud: Only continue if the overcloud stack has a deployment status of DEPLOY_SUCCESS . Install the database client tools: Configure root user access to the database: Perform a backup of the undercloud databases: Check that your undercloud contains 10 GB free storage to accommodate for image caching and conversion when you provision the new node: If you are reusing the IP address for the new controller node, ensure that you delete the port used by the old controller: Check the status of Pacemaker on the running Controller nodes. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to view the Pacemaker status: The output shows all services that are running on the existing nodes and those that are stopped on the failed node. Check the following parameters on each node of the overcloud MariaDB cluster: wsrep_local_state_comment: Synced wsrep_cluster_size: 2 Use the following command to check these parameters on each running Controller node. In this example, the Controller node IP addresses are 192.168.0.47 and 192.168.0.46: Check the RabbitMQ status. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to view the RabbitMQ status: The running_nodes key should show only the two available nodes and not the failed node. If fencing is enabled, disable it. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to check the status of fencing: Run the following command to disable fencing: Login to the failed Controller node and stop all the nova_* containers that are running: If you are using the Bare Metal Service (ironic) as the virt driver, you must reuse the hostname when replacing the Controller node. Reusing the hostname prevents the Compute service (nova) database from being corrupted and prevents the workload from needing to be rebalanced when the Bare Metal Provisioning service is redeployed. 11.2. Removing a Ceph Monitor daemon If your Controller node is running a Ceph monitor service, complete the following steps to remove the ceph-mon daemon. Note Adding a new Controller node to the cluster also adds a new Ceph monitor daemon automatically. 
Procedure Connect to the Controller node that you want to replace: List the Ceph mon services: Stop the Ceph mon service: Disable the Ceph mon service: Disconnect from the Controller node that you want to replace. Use SSH to connect to another Controller node in the same cluster: The Ceph specification file is modified and applied later in this procedure. To manipulate the file, you must export it: Remove the monitor from the cluster: Disconnect from the Controller node and log back into the Controller node you are removing from the cluster: List the Ceph mgr services: Stop the Ceph mgr service: Disable the Ceph mgr service: Start a cephadm shell: Verify that the Ceph mgr service for the Controller node is removed from the cluster: The node is not listed if the Ceph mgr service is successfully removed. Export the Red Hat Ceph Storage specification: In the spec.yaml specification file, remove all instances of the host, for example controller-0 , from the service_type: mon and service_type: mgr . Reapply the Red Hat Ceph Storage specification: Verify that no Ceph daemons remain on the removed host: Note If daemons are present, use the following command to remove them: Prior to running the ceph orch host drain command, back up the contents of /etc/ceph . Restore the contents after running the ceph orch host drain command. You must back up prior to running the ceph orch host drain command until https://bugzilla.redhat.com/show_bug.cgi?id=2153827 is resolved. Remove the controller-0 host from the Red Hat Ceph Storage cluster: Exit the cephadm shell: Additional Resources For more information on controlling Red Hat Ceph Storage services with systemd, see Understanding process management for Ceph . For more information on editing and applying Red Hat Ceph Storage specification files, see Deploying the Ceph monitor daemons using the service specification . 11.3. Preparing the cluster for Controller node replacement Before you replace the node, ensure that Pacemaker is not running on the node and then remove that node from the Pacemaker cluster. Procedure To view the list of IP addresses for the Controller nodes, run the following command: Log in to the node and confirm the pacemaker status. If pacemaker is running, use the pcs cluster command to stop pacemaker. This example stops pacemaker on overcloud-controller-0 : Note In the case that the node is physically unavailable or stopped, it is not necessary to perform the operation, as pacemaker is already stopped on that node. After you stop Pacemaker on the node, delete the node from the pacemaker cluster. The following example logs in to overcloud-controller-1 to remove overcloud-controller-0 : If the node that you want to replace is unreachable (for example, due to a hardware failure), run the pcs command with additional --skip-offline and --force options to forcibly remove the node from the cluster: After you remove the node from the pacemaker cluster, remove the node from the list of known hosts in pacemaker: You can run this command whether the node is reachable or not. To ensure that the new Controller node uses the correct STONITH fencing device after replacement, delete the devices from the node by entering the following command: Replace <stonith_resource_name> with the name of the STONITH resource that corresponds to the node. The resource name uses the format <resource_agent>-<host_mac> . You can find the resource agent and the host MAC address in the FencingConfig section of the fencing.yaml file.
The overcloud database must continue to run during the replacement procedure. To ensure that Pacemaker does not stop Galera during this procedure, select a running Controller node and run the following command on the undercloud with the IP address of the Controller node: Remove the OVN northbound database server for the replaced Controller node from the cluster: Obtain the server ID of the OVN northbound database server to be replaced: Replace <controller_ip> with the IP address of any active Controller node. You should see output similar to the following: In this example, 172.17.1.55 is the internal IP address of the Controller node that is being replaced, so the northbound database server ID is 96da . Using the server ID you obtained in the preceding step, remove the OVN northbound database server by running the following command: In this example, you would replace 172.17.1.52 with the IP address of any active Controller node, and replace 96da with the server ID of the OVN northbound database server. Remove the OVN southbound database server for the replaced Controller node from the cluster: Obtain the server ID of the OVN southbound database server to be replaced: Replace <controller_ip> with the IP address of any active Controller node. You should see output similar to the following: In this example, 172.17.1.55 is the internal IP address of the Controller node that is being replaced, so the southbound database server ID is e544 . Using the server ID you obtained in the preceding step, remove the OVN southbound database server by running the following command: In this example, you would replace 172.17.1.52 with the IP address of any active Controller node, and replace e544 with the server ID of the OVN southbound database server. Run the following clean up commands to prevent cluster rejoins. Substitute <replaced_controller_ip> with the IP address of the Controller node that you are replacing: 11.4. Removing the controller node from IdM If your nodes are protected with TLSe, you must remove the host and DNS entries from the IdM (Identity Management) server. On your IdM server, remove all DNS entries for the controller node from IDM: Replace <host name> with the host name of your controller Replace <domain name> with domain name of your controller On the IdM server, remove the client host entry from the IdM LDAP server. This removes all services and revokes all certificates issued for that host: 11.5. Replacing a bootstrap Controller node If you want to replace the Controller node that you use for bootstrap operations and keep the node name, complete the following steps to set the name of the bootstrap Controller node after the replacement process. Procedure Find the name of the bootstrap Controller node by running the following command: Replace <controller_ip> with the IP address of any active Controller node. Check if your environment files include the ExtraConfig and AllNodesExtraMapData parameters. If the parameters are not set, create the following environment file ~/templates/bootstrap-controller.yaml and add the following content: Replace NODE_NAME with the name of an existing Controller node that you want to use in bootstrap operations after the replacement process. Replace NODE_IP with the IP address mapped to the Controller named in NODE_NAME . To get the name, run the following command: If your environment files already include the ExtraConfig and AllNodesExtraMapData parameters, add only the lines shown in this step. 
For information about troubleshooting the bootstrap Controller node replacement, see the article Replacement of the first Controller node fails at step 1 if the same hostname is used for a new node . 11.6. Unprovision and remove Controller nodes You can unprovision and remove Controller nodes. Procedure Source the stackrc file: Identify the UUID of the overcloud-controller-0 node: Set the node to maintenance mode: Copy the overcloud-baremetal-deploy.yaml file: In the unprovision_controller-0.yaml file, lower the Controller count to unprovision the Controller node that you are replacing. In this example, the count is reduced from 3 to 2 . Move the controller-0 node to the instances dictionary and set the provisioned parameter to false : Run the node unprovision command: Optional: Delete the ironic node: Replace IRONIC_NODE_UUID with the UUID of the node. 11.7. Deploying a new controller node to the overcloud To deploy a new controller node to the overcloud complete the following steps. Prerequisites The new Controller node must be registered, inspected, and tagged ready for provisioning. For more information, see Provisioning bare metal overcloud nodes . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: USD source ~/stackrc If you want to use the same scheduling, placement, or IP addresses you can edit the overcloud-baremetal-deploy.yaml environment file. Set the hostname , name , and networks for the new controller-0 instance in the instances section: 1 1 If you are using the Bare Metal Service (ironic) as the virt driver, you must reuse the hostname when replacing the Controller node. Reusing the hostname prevents the Compute service (nova) database from being corrupted and prevents the workload from needing to be rebalanced when the Bare Metal Provisioning service is redeployed. Provision the overcloud: If you added a new controller-0 instance, then remove the instances section from the overcloud-baremetal-deploy.yaml file when the node is provisioned. To create the cephadm user on the new Controller node, export a basic Ceph specification containing the new host information: Note If your environment uses a custom role, include the --roles-data option. Add the cephadm user to the new Controller node: Log in to the Controller node and add the new role to the Ceph cluster: Replace <IP_ADDRESS> with the IP address of the Controller node. Replace <LABELS> with any required Ceph labels. Re-run the openstack overcloud deploy command: 1 Specifies the custom network configuration. Required if you use network isolation or custom composable networks. 2 Include the generated roles data if you use custom roles or want to enable a multi-architecture cloud. Note If the replacement Controller node is the bootstrap node, include the bootstrap_node.yaml environment file. 11.8. Deploying Ceph services on the new controller node After you provision a new Controller node and the Ceph monitor services are running you can deploy the mgr , rgw and osd Ceph services on the Controller node. Prerequisites The new Controller node is provisioned and is running Ceph monitor services. Procedure Modify the spec.yml environment file, replace the Controller node name with the new Controller node name: Note Do not use the basic Ceph environment file ceph_spec_host.yaml as it does not contain all necessary cluster information. Apply the modified Ceph specification file: Verify the visibility of the new monitor: . 11.9. 
Cleaning up after Controller node replacement After you complete the node replacement, you can finalize the Controller cluster. Procedure Log into a Controller node. Enable Pacemaker management of the Galera cluster and start Galera on the new node: Enable fencing: Perform a final status check to ensure that the services are running correctly: Note If any services have failed, use the pcs resource refresh command to resolve and restart the failed services. Exit to director: Source the overcloudrc file so that you can interact with the overcloud: Check the network agents in your overcloud environment: If any agents appear for the old node, remove them: If necessary, add your router to the L3 agent host on the new node. Use the following example command to add a router named r1 to the L3 agent using the UUID 2d1c1dc1-d9d4-4fa9-b2c8-f29cd1a649d4: Clean the cinder services. List the cinder services: Log in to a controller node, connect to the cinder-api container and use the cinder-manage service remove command to remove leftover services: Clean the RabbitMQ cluster. Log into a Controller node. Use the podman exec command to launch bash, and verify the status of the RabbitMQ cluster: Use the rabbitmqctl command to forget the replaced controller node: If you replaced a bootstrap Controller node, you must remove the environment file ~/templates/bootstrap-controller.yaml after the replacement process, or delete the pacemaker_short_bootstrap_node_name and mysql_short_bootstrap_node_name parameters from your existing environment file. This step prevents director from attempting to override the Controller node name in subsequent replacements. For more information, see Replacing a bootstrap Controller node . If you are using the Object Storage service (swift) on the overcloud, you must synchronize the swift rings after updating the overcloud nodes. Use a script, similar to the following example, to distribute ring files from a previously existing Controller node (Controller node 0 in this example) to all Controller nodes and restart the Object Storage service containers on those nodes: Fetch the current set of ring files: Upload rings to all nodes, put them into the correct place, and restart swift services:
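The preliminary checks in section 11.1 can be scripted; the loop below is a sketch that reuses the example Controller IP addresses from this chapter (192.168.0.46 and 192.168.0.47) and only reads status, so it is safe to run before deciding on a replacement.
#!/bin/sh
# Pre-replacement health check across the surviving Controller nodes.
for i in 192.168.0.46 192.168.0.47; do
    echo "*** $i ***"
    # Pacemaker view of the cluster.
    ssh tripleo-admin@$i "sudo pcs status | grep -w Online"
    # Galera cluster state and size.
    ssh tripleo-admin@$i "sudo podman exec \$(sudo podman ps --filter name=galera-bundle -q) \
        mysql -e \"SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_size';\""
    # RabbitMQ cluster membership.
    ssh tripleo-admin@$i "sudo podman exec \$(sudo podman ps -f name=rabbitmq-bundle -q) rabbitmqctl cluster_status"
done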
|
[
"source stackrc openstack overcloud status",
"sudo dnf -y install mariadb",
"sudo cp /var/lib/config-data/puppet-generated/mysql/root/.my.cnf /root/.",
"mkdir /home/stack/backup sudo mysqldump --all-databases --quick --single-transaction | gzip > /home/stack/backup/dump_db_undercloud.sql.gz",
"df -h",
"openstack port delete <port>",
"ssh [email protected] 'sudo pcs status'",
"for i in 192.168.0.46 192.168.0.47 ; do echo \"*** USDi ***\" ; ssh tripleo-admin@USDi \"sudo podman exec \\USD(sudo podman ps --filter name=galera-bundle -q) mysql -e \\\"SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_size';\\\"\"; done",
"ssh [email protected] \"sudo podman exec \\USD(sudo podman ps -f name=rabbitmq-bundle -q) rabbitmqctl cluster_status\"",
"ssh [email protected] \"sudo pcs property show stonith-enabled\"",
"ssh [email protected] \"sudo pcs property set stonith-enabled=false\"",
"sudo systemctl stop tripleo_nova_api.service sudo systemctl stop tripleo_nova_api_cron.service sudo systemctl stop tripleo_nova_conductor.service sudo systemctl stop tripleo_nova_metadata.service sudo systemctl stop tripleo_nova_scheduler.service",
"ssh [email protected]",
"sudo systemctl --type=service | grep ceph ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@crash.controller-0.service loaded active running Ceph crash.controller-0 for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mgr.controller-0.mufglq.service loaded active running Ceph mgr.controller-0.mufglq for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mon.controller-0.service loaded active running Ceph mon.controller-0 for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@rgw.rgw.controller-0.ikaevh.service loaded active running Ceph rgw.rgw.controller-0.ikaevh for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31",
"sudo systemtctl stop ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mon.controller-0.service",
"sudo systemctl disable ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mon.controller-0.service",
"ssh [email protected]",
"sudo cephadm shell -- ceph orch ls --export > spec.yaml",
"sudo cephadm shell -- ceph mon remove controller-0 removing mon.controller-0 at [v2:172.23.3.153:3300/0,v1:172.23.3.153:6789/0], there will be 2 monitors",
"ssh [email protected]",
"sudo systemctl --type=service | grep ceph ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@crash.controller-0.service loaded active running Ceph crash.controller-0 for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mgr.controller-0.mufglq.service loaded active running Ceph mgr.controller-0.mufglq for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@rgw.rgw.controller-0.ikaevh.service loaded active running Ceph rgw.rgw.controller-0.ikaevh for 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31",
"sudo systemctl stop ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mgr.controller-0.mufglq.service",
"sudo systemctl disable ceph-4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31@mgr.controller-0.mufglq.service",
"sudo cephadm shell",
"ceph -s cluster: id: b9b53581-d590-41ac-8463-2f50aa985001 health: HEALTH_OK services: mon: 2 daemons, quorum controller-2,controller-1 (age 2h) mgr: controller-2(active, since 20h), standbys: controller1-1 osd: 15 osds: 15 up (since 3h), 15 in (since 3h) data: pools: 3 pools, 384 pgs objects: 32 objects, 88 MiB usage: 16 GiB used, 734 GiB / 750 GiB avail pgs: 384 active+clean",
"ceph orch ls --export > spec.yaml",
"ceph orch apply -i spec.yaml",
"ceph orch ps controller-0",
"ceph orch host drain controller-0",
"ceph orch host rm controller-0 Removed host 'controller-0'",
"exit",
"(undercloud)USD metalsmith -c Hostname -c \"IP Addresses\" list +------------------------+-----------------------+ | Hostname | IP Addresses | +------------------------+-----------------------+ | overcloud-compute-0 | ctlplane=192.168.0.44 | | overcloud-controller-0 | ctlplane=192.168.0.47 | | overcloud-controller-1 | ctlplane=192.168.0.45 | | overcloud-controller-2 | ctlplane=192.168.0.46 | +------------------------+-----------------------+",
"(undercloud) USD ssh [email protected] \"sudo pcs status | grep -w Online | grep -w overcloud-controller-0\" (undercloud) USD ssh [email protected] \"sudo pcs cluster stop overcloud-controller-0\"",
"(undercloud) USD ssh [email protected] \"sudo pcs cluster node remove overcloud-controller-0\"",
"(undercloud) USD ssh [email protected] \"sudo pcs cluster node remove overcloud-controller-0 --skip-offline --force\"",
"(undercloud) USD ssh [email protected] \"sudo pcs host deauth overcloud-controller-0\"",
"(undercloud) USD ssh [email protected] \"sudo pcs stonith delete <stonith_resource_name>\"",
"(undercloud) USD ssh [email protected] \"sudo pcs resource unmanage galera-bundle\"",
"ssh tripleo-admin@<controller_ip> sudo podman exec ovn_cluster_north_db_server ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound 2>/dev/null|grep -A4 Servers:",
"Servers: 96da (96da at tcp:172.17.1.55:6643) (self) next_index=26063 match_index=26063 466b (466b at tcp:172.17.1.51:6643) next_index=26064 match_index=26063 last msg 2936 ms ago ba77 (ba77 at tcp:172.17.1.52:6643) next_index=26064 match_index=26063 last msg 2936 ms ago",
"ssh [email protected] sudo podman exec ovn_cluster_north_db_server ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/kick OVN_Northbound 96da",
"ssh tripleo-admin@<controller_ip> sudo podman exec ovn_cluster_south_db_server ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound 2>/dev/null|grep -A4 Servers:",
"Servers: e544 (e544 at tcp:172.17.1.55:6644) last msg 42802690 ms ago 17ca (17ca at tcp:172.17.1.51:6644) last msg 5281 ms ago 6e52 (6e52 at tcp:172.17.1.52:6644) (self)",
"ssh [email protected] sudo podman exec ovn_cluster_south_db_server ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/kick OVN_Southbound e544",
"ssh tripleo-admin@<replaced_controller_ip> sudo systemctl disable --now tripleo_ovn_cluster_south_db_server.service tripleo_ovn_cluster_north_db_server.service ssh tripleo-admin@<replaced_controller_ip> sudo rm -rfv /var/lib/openvswitch/ovn/.ovn* /var/lib/openvswitch/ovn/ovn*.db",
"ipa dnsrecord-del Record name: <host name> 1 Zone name: <domain name> 2 No option to delete specific record provided. Delete all? Yes/No (default No): yes ------------------------ Deleted record \"<host name>\"",
"ipa host-del client.idm.example.com",
"ssh tripleo-admin@<controller_ip> \"sudo hiera -c /etc/puppet/hiera.yaml pacemaker_short_bootstrap_node_name\"",
"parameter_defaults: ExtraConfig: pacemaker_short_bootstrap_node_name: NODE_NAME mysql_short_bootstrap_node_name: NODE_NAME AllNodesExtraMapData: ovn_dbs_bootstrap_node_ip: NODE_IP ovn_dbs_short_bootstrap_node_name: NODE_NAME",
"sudo hiera -c /etc/puppet/hiera.yaml ovn_dbs_node_ips",
"source ~/stackrc",
"(undercloud)USD NODE=USD(metalsmith -c UUID -f value show overcloud-controller-0)",
"openstack baremetal node maintenance set USDNODE",
"cp /home/stack/templates/overcloud-baremetal-deploy.yaml /home/stack/templates/unprovision_controller-0.yaml",
"- name: Controller count: 2 hostname_format: controller-%index% defaults: resource_class: BAREMETAL.controller networks: [ ... ] instances: - hostname: controller-0 name: <IRONIC_NODE_UUID_or_NAME> provisioned: false - name: Compute count: 2 hostname_format: compute-%index% defaults: resource_class: BAREMETAL.compute networks: [ ... ]",
"openstack overcloud node delete --stack overcloud --baremetal-deployment /home/stack/templates/unprovision_controller-0.yaml",
"The following nodes will be unprovisioned: +--------------+-------------------------+--------------------------------------+ | hostname | name | id | +--------------+-------------------------+--------------------------------------+ | controller-0 | baremetal-35400-leaf1-2 | b0d5abf7-df28-4ae7-b5da-9491e84c21ac | +--------------+-------------------------+--------------------------------------+ Are you sure you want to unprovision these overcloud nodes and ports [y/N]?",
"openstack baremetal node delete <IRONIC_NODE_UUID>",
"- name: Controller count: 3 hostname_format: controller-%index% defaults: resource_class: BAREMETAL.controller networks: - network: external subnet: external_subnet - network: internal_api subnet: internal_api_subnet01 - network: storage subnet: storage_subnet01 - network: storage_mgmt subnet: storage_mgmt_subnet01 - network: tenant subnet: tenant_subnet01 network_config: template: templates/multiple_nics/multiple_nics_dvr.j2 default_route_network: - external instances: - hostname: controller-0 1 name: baremetal-35400-leaf1-2 networks: - network: external subnet: external_subnet fixed_ip: 10.0.0.224 - network: internal_api subnet: internal_api_subnet01 fixed_ip: 172.17.0.97 - network: storage subnet: storage_subnet01 fixed_ip: 172.18.0.24 - network: storage_mgmt subnet: storage_mgmt_subnet01 fixed_ip: 172.19.0.129 - network: tenant subnet: tenant_subnet01 fixed_ip: 172.16.0.11 - name: Compute count: 2 hostname_format: compute-%index% defaults: [ ... ]",
"openstack overcloud node provision --stack overcloud --network-config --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"openstack overcloud ceph spec --stack overcloud /home/stack/templates/overcloud-baremetal-deployed.yaml -o ceph_spec_host.yaml",
"openstack overcloud ceph user enable --stack overcloud ceph_spec_host.yaml",
"sudo cephadm shell -- ceph orch host add controller-3 <IP_ADDRESS> <LABELS> 192.168.24.31 _admin mon mgr Inferring fsid 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 Using recent ceph image undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph@sha256:3075e8708792ebd527ca14849b6af4a11256a3f881ab09b837d7af0f8b2102ea Added host 'controller-3' with addr '192.168.24.31'",
"openstack overcloud deploy --stack overcloud --templates [ -n /home/stack/templates/network_data.yaml \\ ] 1 [ -r /home/stack/templates/roles_data.yaml \\ ] 2 -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vips-deployed.yaml -e /home/stack/templates/bootstrap_node.yaml -e [ ... ]",
"cephadm shell -- ceph orch ls --export > spec.yml",
"cat spec.yml | sudo cephadm shell -- ceph orch apply -i - Inferring fsid 4cf401f9-dd4c-5cda-9f0a-fa47fbf12b31 Using recent ceph image undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph@sha256:3075e8708792ebd527ca14849b6af4a11256a3f881ab09b837d7af0f8b2102ea Scheduled crash update Scheduled mgr update Scheduled mon update Scheduled osd.default_drive_group update Scheduled rgw.rgw update",
"sudo cephadm --ceph status",
"[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs resource refresh galera-bundle [tripleo-admin@overcloud-controller-0 ~]USD sudo pcs resource manage galera-bundle",
"[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs property set stonith-enabled=true",
"[tripleo-admin@overcloud-controller-0 ~]USD sudo pcs status",
"[tripleo-admin@overcloud-controller-0 ~]USD exit",
"source ~/overcloudrc",
"(overcloud) USD openstack network agent list",
"(overcloud) USD for AGENT in USD(openstack network agent list --host overcloud-controller-1.localdomain -c ID -f value) ; do openstack network agent delete USDAGENT ; done",
"(overcloud) USD openstack network agent add router --l3 2d1c1dc1-d9d4-4fa9-b2c8-f29cd1a649d4 r1",
"(overcloud) USD openstack volume service list",
"[tripleo-admin@overcloud-controller-0 ~]USD sudo podman exec -it cinder_api cinder-manage service remove cinder-backup <host> [tripleo-admin@overcloud-controller-0 ~]USD sudo podman exec -it cinder_api cinder-manage service remove cinder-scheduler <host>",
"[tripleo-admin@overcloud-controller-0 ~]USD sudo podman exec -it rabbitmq-bundle-podman-0 bash [root@overcloud-controller-0 /]USD rabbitmqctl cluster_status",
"[root@controller-0 /]USD rabbitmqctl forget_cluster_node <node_name>",
"#!/bin/sh set -xe SRC=\"[email protected]\" ALL=\"[email protected] [email protected] [email protected]\"",
"ssh \"USD{SRC}\" 'sudo tar -czvf - /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/{*.builder,*.ring.gz,backups/*.builder}' > swift-rings.tar.gz",
"for DST in USD{ALL}; do cat swift-rings.tar.gz | ssh \"USD{DST}\" 'sudo tar -C / -xvzf -' ssh \"USD{DST}\" 'sudo podman restart swift_copy_rings' ssh \"USD{DST}\" 'sudo systemctl restart tripleo_swift*' done"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_replacing-controller-nodes
|
Chapter 6. Uninstalling a cluster on Nutanix
|
Chapter 6. Uninstalling a cluster on Nutanix You can remove a cluster that you deployed to Nutanix. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
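A small wrapper such as the following sketch can make the uninstall less error-prone; the installation directory path is an assumed example, and the script only adds a guard around the documented destroy command.
#!/bin/sh
# Assumed location of the installation assets created by openshift-install.
INSTALL_DIR=./nutanix-cluster
# The installer needs metadata.json to find and delete the cluster resources.
if [ ! -f "$INSTALL_DIR/metadata.json" ]; then
    echo "metadata.json not found in $INSTALL_DIR; cannot destroy the cluster" >&2
    exit 1
fi
# Destroy the cluster; raise the log level when troubleshooting.
./openshift-install destroy cluster --dir "$INSTALL_DIR" --log-level info
# Optional cleanup of the now-unneeded installation directory.
# rm -rf "$INSTALL_DIR"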
|
[
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_nutanix/uninstalling-cluster-nutanix
|
Chapter 7. Configuring notifications for inventory events
|
Chapter 7. Configuring notifications for inventory events The Inventory service triggers three types of events: New system registered System deleted Validation error The New system registered and System deleted events trigger when you register a new system in inventory, or when a system is removed from inventory (either manually or when it becomes stale and Insights automatically removes it). Validation error events trigger when data in a payload from insights-client is not valid (corrupted data, incorrect values, or other issues). The validation process follows these steps: insights-client runs on the system and generates a payload. insights-client uploads the payload to the Hybrid Cloud Console. The Hybrid Cloud Console receives the payload and validates it. The validation event triggers at this step. A validation error means that the payload cannot be processed, and that the console and Insights services cannot use its data. You can configure notifications for these events using the Notifications service in the Red Hat Hybrid Cloud Console. The Notifications service enables you to configure responses to these events for your account. You can send email notifications to groups of users, or you can forward events to third-party applications, such as Splunk, ServiceNow, Event-Driven Ansible, Slack, Microsoft Teams, or Google Chat. You can also forward notifications, using a generic webhook with the Integrations service. Note To receive Notifications emails, users must subscribe to email notifications in their user preferences. Users may choose to receive each email notification individually, or they may subscribe to a daily digest email. For more information, refer to Configuring user preferences for email notifications . The New system registered and System deleted events are particularly useful for driving automation, and for integrating Red Hat Insights into your operational workflows. For example, you can configure these events to automatically launch compliance or malware checks, validate systems assignments to Workspaces, update external CMDB records, or continuously monitor your RHEL environment. 7.1. Setting up organization notifications for inventory events Note Make sure that you configure third-party system integrations in the Hybrid Cloud Console, as well as any behavior groups that should receive inventory notifications. For more information about third-party system integrations, refer to Integrating the Red Hat Hybrid Cloud Console with third-party applications . Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as a Notifications Administrator. Procedure Navigate to Settings > Notifications > Configure Events . In the Configuration tab, select the Service filter. Click Filter by service , and then select Inventory from the drop-down list. The inventory events appear in the list of events. Select the event type you want to configure (for example, New system registered ). To configure the event type, click the Edit (pencil) icon at the far right of the event type. A drop-down list displays with the list of available behavior groups configured in your organization. Select the checkboxes to the behavior groups you want to configure for the Inventory event type. When you have finished selecting behavior groups, click the checkmark to the list of behavior groups to save your selections. Additional resources For more information about behavior groups, refer to Configuring notifications on the Red Hat Hybrid Cloud Console . 7.2. 
Setting up user email notifications for inventory events Note Make sure to configure email preferences for each user to receive email notifications. For more information about how to set up user preferences, refer to Configuring user preferences for email notifications . Prerequisites You are a member of a user group that receives email notifications as part of a behavior group. The behavior group is configured to trigger email notifications for your systems and to send those notifications to the user group to which you belong. Procedure Navigate to Settings > Notifications > Notification Preferences . Select Inventory from the list of services. The list of available notifications for Inventory displays. Click the checkbox to each type of notification you want to receive, or click Select All to receive all notifications for all Inventory events. Click Save . Note Configuring instant notifications might result in a large volume of email messages.
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory/configuring-inventory-events_user-access
|
Chapter 2. Installing the OpenShift Serverless Operator
|
Chapter 2. Installing the OpenShift Serverless Operator Installing the OpenShift Serverless Operator enables you to install and use Knative Serving, Knative Eventing, and the Knative broker for Apache Kafka on a OpenShift Container Platform cluster. The OpenShift Serverless Operator manages Knative custom resource definitions (CRDs) for your cluster and enables you to configure them without directly modifying individual config maps for each component. 2.1. OpenShift Serverless Operator resource requirements The following sample setup might help you to estimate the minimum resource requirements for your OpenShift Serverless Operator installation. Your specific requirements might be significantly different, and might grow as your use of OpenShift Serverless increases. The test suite used in the sample setup has the following parameters: An OpenShift Container Platform cluster with: 10 workers (8 vCPU, 16GiB memory) 3 workers dedicated to Kafka 2 workers dedicated to Prometheus 5 workers remaining for both Serverless and test deployments 89 test scenarios running in parallel, mainly focused on using the control plane. Testing scenarios typically have a Knative Service sending events through an in-memory channel, a Kafka channel, an in-memory broker, or a Kafka broker to either a Deployment or a Knative Service. 48 re-creation scenarios, where the testing scenario is repeatedly being deleted and re-created. 41 stable scenarios, where events are sent throughout the test run slowly but continuously. The test setup contains, in total: 170 Knative Services 20 In-Memory Channels 24 Kafka Channels 52 Subscriptions 42 Brokers 68 Triggers The following table details the minimal resource requirements for a Highly-Available (HA) setup discovered by the test suite: Component RAM resources CPU resources OpenShift Serverless Operator 1GB 0.2 core Knative Serving 5GB 2.5 cores Knative Eventing 2GB 0.5 core Knative broker for Apache Kafka 6GB 1 core Sum 14GB 4.2 cores The following table details the minimal resource requirements for a non-HA setup discovered by the test suite: Component RAM resources CPU resources OpenShift Serverless Operator 1GB 0.2 core Knative Serving 2.5GB 1.2 cores Knative Eventing 1GB 0.2 core Knative broker for Apache Kafka 6GB 1 core Sum 10.5GB 2.6 cores 2.2. Installing the OpenShift Serverless Operator from the web console You can install the OpenShift Serverless Operator from the OperatorHub by using the OpenShift Container Platform web console. Installing this Operator enables you to install and use Knative components. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. For OpenShift Container Platform, your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. You have logged in to the web console. Procedure In the web console, navigate to the Operators OperatorHub page. Scroll, or type the keyword Serverless into the Filter by keyword box to find the OpenShift Serverless Operator. Review the information about the Operator and click Install . On the Install Operator page: The Installation Mode is All namespaces on the cluster (default) . This mode installs the Operator in the default openshift-serverless namespace to watch and be made available to all namespaces in the cluster. The Installed Namespace is openshift-serverless . Select an Update Channel . 
The stable channel enables installation of the latest stable release of the OpenShift Serverless Operator. The stable channel is the default. To install another version, specify the corresponding stable-x.y channel, for example stable-1.29 . Select Automatic or Manual approval strategy. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. From the Catalog Operator Management page, you can monitor the OpenShift Serverless Operator subscription's installation and upgrade progress. If you selected a Manual approval strategy, the subscription's upgrade status will remain Upgrading until you review and approve its install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. Verification After the Subscription's upgrade status is Up to date , select Catalog Installed Operators to verify that the OpenShift Serverless Operator eventually shows up and its Status ultimately resolves to InstallSucceeded in the relevant namespace. If it does not: Switch to the Catalog Operator Management page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Check the logs in any pods in the openshift-serverless project on the Workloads Pods page that are reporting issues to troubleshoot further. Important If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless , you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving or Knative Eventing. 2.3. Installing the OpenShift Serverless Operator from the CLI You can install the OpenShift Serverless Operator from the OperatorHub by using the CLI. Installing this Operator enables you to install and use Knative components. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. For OpenShift Container Platform, your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. You have logged in to the cluster. Procedure Create a YAML file containing Namespace , OperatorGroup , and Subscription objects to subscribe a namespace to the OpenShift Serverless Operator. For example, create the file serverless-subscription.yaml with the following content: Example subscription --- apiVersion: v1 kind: Namespace metadata: name: openshift-serverless --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: serverless-operators namespace: openshift-serverless spec: {} --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: serverless-operator namespace: openshift-serverless spec: channel: stable 1 name: serverless-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4 1 The channel name of the Operator. The stable channel enables installation of the most recent stable version of the OpenShift Serverless Operator. To install another version, specify the corresponding stable-x.y channel, for example stable-1.29 . 2 The name of the Operator to subscribe to.
For the OpenShift Serverless Operator, this is always serverless-operator . 3 The name of the CatalogSource that provides the Operator. Use redhat-operators for the default OperatorHub catalog sources. 4 The namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub catalog sources. Create the Subscription object: Verification Check that the cluster service version (CSV) has reached the Succeeded phase: Example command USD oc get csv Example output NAME DISPLAY VERSION REPLACES PHASE serverless-operator.v1.25.0 Red Hat OpenShift Serverless 1.25.0 serverless-operator.v1.24.0 Succeeded Important If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless , you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving or Knative Eventing. 2.4. Global configuration The OpenShift Serverless Operator manages the global configuration of a Knative installation, including propagating values from the KnativeServing and KnativeEventing custom resources to system config maps . Any updates to config maps which are applied manually are overwritten by the Operator. However, modifying the Knative custom resources allows you to set values for these config maps. Knative has multiple config maps that are named with the prefix config- . All Knative config maps are created in the same namespace as the custom resource that they apply to. For example, if the KnativeServing custom resource is created in the knative-serving namespace, all Knative Serving config maps are also created in this namespace. The spec.config in the Knative custom resources have one <name> entry for each config map, named config-<name> , with a value which is be used for the config map data . 2.5. Additional resources for OpenShift Container Platform Managing resources from custom resource definitions Understanding persistent storage Configuring a custom PKI 2.6. steps After the OpenShift Serverless Operator is installed, you can install Knative Serving or install Knative Eventing .
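The CLI installation can be scripted end to end; the following sketch simply applies the subscription shown above and polls until the CSV reports Succeeded. The 5-minute polling window is an arbitrary choice, not a documented value.
#!/bin/sh
# Apply the Namespace, OperatorGroup, and Subscription objects.
oc apply -f serverless-subscription.yaml
# Poll until the OpenShift Serverless Operator CSV reaches the Succeeded phase.
for i in $(seq 1 30); do
    phase=$(oc get csv -n openshift-serverless --no-headers 2>/dev/null | awk '/serverless-operator/ {print $NF}')
    echo "CSV phase: ${phase:-<none>}"
    [ "$phase" = "Succeeded" ] && break
    sleep 10
done
# Inspect the Operator pods if the CSV does not succeed.
oc get pods -n openshift-serverless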
|
[
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-serverless --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: serverless-operators namespace: openshift-serverless spec: {} --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: serverless-operator namespace: openshift-serverless spec: channel: stable 1 name: serverless-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f serverless-subscription.yaml",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE serverless-operator.v1.25.0 Red Hat OpenShift Serverless 1.25.0 serverless-operator.v1.24.0 Succeeded"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/installing_openshift_serverless/install-serverless-operator
|
Chapter 14. Using Cruise Control to modify topic replication factor
|
Chapter 14. Using Cruise Control to modify topic replication factor Make requests to the /topic_configuration endpoint of the Cruise Control REST API to modify topic configurations, including the replication factor. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. You have configured Cruise Control . You have deployed the Cruise Control Metrics Reporter . Procedure Start the Cruise Control server. The server starts on port 9090 by default; optionally, specify a different port. cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number> To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state' Run the bin/kafka-topics.sh command with the --describe option to check the current replication factor of the target topic: /opt/kafka/bin/kafka-topics.sh \ --bootstrap-server localhost:9092 \ --topic <topic_name> \ --describe Update the replication factor for the topic: curl -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/topic_configuration?topic=<topic_name>&replication_factor=<new_replication_factor>&dryrun=false' For example, curl -X POST 'localhost:9090/kafkacruisecontrol/topic_configuration?topic=topic1&replication_factor=3&dryrun=false' . Run the bin/kafka-topics.sh command with the --describe option, as before, to see the results of the change to the topic.
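Before applying the change, it can be useful to preview it; the sketch below is an assumed wrapper around the endpoints shown in this chapter, first running the request with dryrun=true and only then executing it. The host, port, and topic values are placeholders.
#!/bin/sh
CC=localhost:9090      # Cruise Control host:port (placeholder)
TOPIC=topic1           # target topic (placeholder)
RF=3                   # desired replication factor
# Preview the proposal without moving any replicas.
curl -X POST "http://$CC/kafkacruisecontrol/topic_configuration?topic=$TOPIC&replication_factor=$RF&dryrun=true"
# Apply the change for real.
curl -X POST "http://$CC/kafkacruisecontrol/topic_configuration?topic=$TOPIC&replication_factor=$RF&dryrun=false"
# Confirm the new replication factor.
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic "$TOPIC" --describe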
|
[
"cd /opt/cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number>",
"curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state'",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic <topic_name> --describe",
"curl -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/topic_configuration?topic=<topic_name>&replication_factor=<new_replication_factor>&dryrun=false'"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/proc-cc-topic-replication-str
|
5.3.3. Adding a Member to a Running Cluster
|
5.3.3. Adding a Member to a Running Cluster To add a member to a running cluster, follow the steps in this section. From the cluster-specific page, click Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page. Click Add . Clicking Add causes the display of the Add Nodes To Cluster dialog box. Enter the node name in the Node Hostname text box. After you have entered the node name, the node name is reused as the ricci host name; you can override this if you are communicating with ricci on an address that is different from the address to which the cluster node name resolves. As of Red Hat Enterprise Linux 6.9, after you have entered the node name and, if necessary, adjusted the ricci host name, the fingerprint of the certificate of the ricci host is displayed for confirmation. You can verify whether this matches the expected fingerprint. If it is legitimate, enter the ricci password. Important It is strongly advised that you verify the certificate fingerprint of the ricci server you are going to authenticate against. Providing an unverified entity on the network with the ricci password may constitute a confidentiality breach, and communication with an unverified entity may cause an integrity breach. If you are using a different port for the ricci agent than the default of 11111, change this parameter to the port you are using. Check the Enable Shared Storage Support check box if clustered storage is required to download the packages that support clustered storage and enable clustered LVM; you should select this only when you have access to the Resilient Storage Add-On or the Scalable File System Add-On. If you want to add more nodes, click Add Another Node and enter the node name and password for each additional node. Click Add Nodes . Clicking Add Nodes causes the following actions: If you have selected Download Packages , the cluster software packages are downloaded onto the nodes. Cluster software is installed onto the nodes (or it is verified that the appropriate software packages are installed). The cluster configuration file is updated and propagated to each node in the cluster - including the added node. The added node joins the cluster. The Nodes page appears with a message indicating that the node is being added to the cluster. Refresh the page to update the status. When the process of adding a node is complete, click on the node name for the newly-added node to configure fencing for this node, as described in Section 4.6, "Configuring Fence Devices" . Note When you add a node to a cluster that uses UDPU transport, you must restart all nodes in the cluster for the change to take effect.
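After luci reports that the node has been added, you can confirm membership from the command line on any existing cluster node; the sketch below assumes the standard cman/rgmanager stack used with the RHEL 6 High Availability Add-On.
# Show overall cluster and service status, including the newly added member.
clustat
# List cluster members as seen by cman.
cman_tool nodes
# Confirm that the updated cluster.conf on this node is valid.
ccs_config_validate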
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-add-member-running-conga-ca
|
AMQ Streams on OpenShift Overview
|
AMQ Streams on OpenShift Overview Red Hat Streams for Apache Kafka 2.5 Discover the features and functions of AMQ Streams 2.5 on OpenShift Container Platform
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_on_openshift_overview/index
|
Chapter 5. Enabling kdump
|
Chapter 5. Enabling kdump For your RHEL 9 systems, you can enable or disable the kdump functionality on a specific kernel or on all installed kernels. However, you must routinely test the kdump functionality and validate its working status. 5.1. Enabling kdump for all installed kernels After the kexec tool is installed, the kdump service starts when you enable kdump.service . You can enable and start the kdump service for all kernels installed on the machine. Prerequisites You have administrator privileges. Procedure Add the crashkernel= command-line parameter to all installed kernels: xxM is the required memory in megabytes. Enable the kdump service: Verification Check that the kdump service is running: 5.2. Enabling kdump for a specific installed kernel You can enable the kdump service for a specific kernel on the machine. Prerequisites You have administrator privileges. Procedure List the kernels installed on the machine. Add a specific kdump kernel to the system's Grand Unified Bootloader (GRUB) configuration. For example: xxM is the required memory reserve in megabytes. Enable the kdump service. Verification Check that the kdump service is running. 5.3. Disabling the kdump service You can stop the kdump.service and disable the service from starting on your RHEL 9 systems. Prerequisites Fulfilled requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . All configurations for installing kdump are set up according to your needs. For details, see Installing kdump . Procedure To stop the kdump service in the current session: To disable the kdump service: Warning It is recommended to set kptr_restrict=1 as default. When kptr_restrict is set to 1 by default, the kdumpctl service loads the crash kernel regardless of whether Kernel Address Space Layout Randomization ( KASLR ) is enabled. If kptr_restrict is not set to 1 and KASLR is enabled, the contents of the /proc/kcore file are generated as all zeros. The kdumpctl service fails to access the /proc/kcore file and load the crash kernel. The kexec-kdump-howto.txt file displays a warning message, which recommends that you set kptr_restrict=1 . To ensure that the kdumpctl service loads the crash kernel, verify that kernel.kptr_restrict=1 is set in the sysctl.conf file. Additional resources Managing systemd
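Once kdump is enabled, you can optionally verify that a crash actually produces a vmcore. The following sketch forces a kernel panic, so run it only on a test system that you can afford to reboot; the dump is written under /var/crash by default unless you have configured a different target.

# Enable the magic SysRq key, then trigger a kernel crash to test kdump
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger

# After the system reboots, check for the captured vmcore
ls /var/crash/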
|
[
"grubby --update-kernel=ALL --args=\"crashkernel=xxM\"",
"systemctl enable --now kdump.service",
"systemctl status kdump.service β kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)",
"ls -a /boot/vmlinuz- * /boot/vmlinuz-0-rescue-2930657cd0dc43c2b75db480e5e5b4a9 /boot/vmlinuz-4.18.0-330.el8.x86_64 /boot/vmlinuz-4.18.0-330.rt7.111.el8.x86_64",
"grubby --update-kernel= vmlinuz-4.18.0-330.el8.x86_64 --args=\"crashkernel= xxM \"",
"systemctl enable --now kdump.service",
"systemctl status kdump.service β kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)",
"systemctl stop kdump.service",
"systemctl disable kdump.service"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/installing_rhel_9_for_real_time/enabling-kdumpinstalling-rhel-9-for-real-time
|
4.53. e2fsprogs
|
4.53. e2fsprogs 4.53.1. RHBA-2011:1735 - e2fsprogs bug fix and enhancement update Updated e2fsprogs packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The e2fsprogs packages contain a number of utilities that create, check, modify, and correct inconsistencies in ext2, ext3, and ext4 file systems. This includes e2fsck (which repairs file system inconsistencies after an unclean shutdown), mke2fs (which initializes a partition to contain an empty file system), tune2fs (which modifies file system parameters), and most of the other core file system utilities. Bug Fixes BZ# 676465 Running the "e2fsck" command on certain corrupted file systems failed to correct all errors during the first run. This occurred when a file had its xattr block cloned as a duplicate, but the block was later removed from the file because the file system did not contain the xattr feature. However, the block was not cleared from the block bitmaps. During the second run, e2fsck found the cloned xattr block as in use, but not owned by any file, and had to repair the block bitmaps. With this update, the processing of duplicate xattr blocks is skipped on non-xattr file systems. All problems are now discovered during the first run. BZ# 679931 On certain devices with very large physical sector size, the mke2fs utility set the block size to be as large as the size of the physical sector. In some cases, the size of the physical sector was larger than the page size. As a consequence, the file system could not be mounted and, in rare cases, the utility could even fail. With this update, the default block size is not set to be larger than the system's page size, even for large physical sector devices. BZ# 683906 Previously, multiple manual pages contained typos. These typos have been corrected with this update. BZ# 713475 This update modifies parameters of the "mke2fs" command to be consistent with the "discard" and "nodiscard" mount options for all system tools (like mount, fsck, or mkfs). The user is now also informed about the ongoing discard process. BZ# 730083 Previously, the libcom_err libraries were built without the read-only relocation (RELRO) flag. Programs built against these libraries could be vulnerable to various attacks based on overwriting the ELF section of a program. To enhance the security, the e2fsprogs package is now provided with partial RELRO support. Enhancements BZ# 679892 Previously, the tune2fs tool could not set "barrier=0" as the default option on the ext3 and ext4 file systems. With this update, users are now able to set this option when creating the file system, and do not have to maintain the option in the /etc/fstab file across all of the file systems and servers. BZ# 713468 Previously, raw e2image output files could be extremely large sparse files, which were difficult to copy, archive, and transport. This update adds support for exporting images in the qcow format. Images in this format are small and easy to manipulate. Users are advised to upgrade to these updated e2fsprogs packages, which fix these bugs and add these enhancements.
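The qcow export enhancement (BZ# 713468) can be exercised with the e2image utility from the updated packages. The sketch below is illustrative; the device and output file names are placeholders, and the file system should be unmounted or mounted read-only when the image is taken.

# Export a compact metadata image of an ext4 file system in qcow format
e2image -Q /dev/sda1 sda1.qcow2

# The resulting file is much smaller than a raw sparse image and is easy to copy or archive
ls -lh sda1.qcow2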
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/e2fsprogs
|
Chapter 11. Adding managed datasources to Data Grid Server
|
Chapter 11. Adding managed datasources to Data Grid Server Optimize connection pooling and performance for JDBC database connections by adding managed datasources to your Data Grid Server configuration. 11.1. Configuring managed datasources Create managed datasources as part of your Data Grid Server configuration to optimize connection pooling and performance for JDBC database connections. You can then specify the JNDI name of the managed datasources in your caches, which centralizes JDBC connection configuration for your deployment. Prerequisites Copy database drivers to the server/lib directory in your Data Grid Server installation. Tip Use the install command with the Data Grid Command Line Interface (CLI) to download the required drivers to the server/lib directory, for example: Procedure Open your Data Grid Server configuration for editing. Add a new data-source to the data-sources section. Uniquely identify the datasource with the name attribute or field. Specify a JNDI name for the datasource with the jndi-name attribute or field. Tip You use the JNDI name to specify the datasource in your JDBC cache store configuration. Set true as the value of the statistics attribute or field to enable statistics for the datasource through the /metrics endpoint. Provide JDBC driver details that define how to connect to the datasource in the connection-factory section. Specify the name of the database driver with the driver attribute or field. Specify the JDBC connection url with the url attribute or field. Specify credentials with the username and password attributes or fields. Provide any other configuration as appropriate. Define how Data Grid Server nodes pool and reuse connections with connection pool tuning properties in the connection-pool section. Save the changes to your configuration. Verification Use the Data Grid Command Line Interface (CLI) to test the datasource connection, as follows: Start a CLI session. List all datasources and confirm the one you created is available. Test a datasource connection. Managed datasource configuration XML <server xmlns="urn:infinispan:server:15.0"> <data-sources> <!-- Defines a unique name for the datasource and JNDI name that you reference in JDBC cache store configuration. Enables statistics for the datasource, if required. --> <data-source name="ds" jndi-name="jdbc/postgres" statistics="true"> <!-- Specifies the JDBC driver that creates connections. --> <connection-factory driver="org.postgresql.Driver" url="jdbc:postgresql://localhost:5432/postgres" username="postgres" password="changeme"> <!-- Sets optional JDBC driver-specific connection properties. --> <connection-property name="name">value</connection-property> </connection-factory> <!-- Defines connection pool tuning properties.
--> <connection-pool initial-size="1" max-size="10" min-size="3" background-validation="1000" idle-removal="1" blocking-timeout="1000" leak-detection="10000"/> </data-source> </data-sources> </server> JSON { "server": { "data-sources": [{ "name": "ds", "jndi-name": "jdbc/postgres", "statistics": true, "connection-factory": { "driver": "org.postgresql.Driver", "url": "jdbc:postgresql://localhost:5432/postgres", "username": "postgres", "password": "changeme", "connection-properties": { "name": "value" } }, "connection-pool": { "initial-size": 1, "max-size": 10, "min-size": 3, "background-validation": 1000, "idle-removal": 1, "blocking-timeout": 1000, "leak-detection": 10000 } }] } } YAML server: dataSources: - name: ds jndiName: 'jdbc/postgres' statistics: true connectionFactory: driver: "org.postgresql.Driver" url: "jdbc:postgresql://localhost:5432/postgres" username: "postgres" password: "changeme" connectionProperties: name: value connectionPool: initialSize: 1 maxSize: 10 minSize: 3 backgroundValidation: 1000 idleRemoval: 1 blockingTimeout: 1000 leakDetection: 10000 11.2. Configuring caches with JNDI names When you add a managed datasource to Data Grid Server you can add the JNDI name to a JDBC-based cache store configuration. Prerequisites Configure Data Grid Server with a managed datasource. Procedure Open your cache configuration for editing. Add the data-source element or field to the JDBC-based cache store configuration. Specify the JNDI name of the managed datasource as the value of the jndi-url attribute. Configure the JDBC-based cache stores as appropriate. Save the changes to your configuration. JNDI name in cache configuration XML <distributed-cache> <persistence> <jdbc:string-keyed-jdbc-store> <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. --> <jdbc:data-source jndi-url="jdbc/postgres"/> <jdbc:string-keyed-table drop-on-exit="true" create-on-start="true" prefix="TBL"> <jdbc:id-column name="ID" type="VARCHAR(255)"/> <jdbc:data-column name="DATA" type="BYTEA"/> <jdbc:timestamp-column name="TS" type="BIGINT"/> <jdbc:segment-column name="S" type="INT"/> </jdbc:string-keyed-table> </jdbc:string-keyed-jdbc-store> </persistence> </distributed-cache> JSON { "distributed-cache": { "persistence": { "string-keyed-jdbc-store": { "data-source": { "jndi-url": "jdbc/postgres" }, "string-keyed-table": { "prefix": "TBL", "drop-on-exit": true, "create-on-start": true, "id-column": { "name": "ID", "type": "VARCHAR(255)" }, "data-column": { "name": "DATA", "type": "BYTEA" }, "timestamp-column": { "name": "TS", "type": "BIGINT" }, "segment-column": { "name": "S", "type": "INT" } } } } } } YAML distributedCache: persistence: stringKeyedJdbcStore: dataSource: jndi-url: "jdbc/postgres" stringKeyedTable: prefix: "TBL" dropOnExit: true createOnStart: true idColumn: name: "ID" type: "VARCHAR(255)" dataColumn: name: "DATA" type: "BYTEA" timestampColumn: name: "TS" type: "BIGINT" segmentColumn: name: "S" type: "INT" 11.3. Connection pool tuning properties You can tune JDBC connection pools for managed datasources in your Data Grid Server configuration. Property Description initial-size Initial number of connections the pool should hold. max-size Maximum number of connections in the pool. min-size Minimum number of connections the pool should hold. blocking-timeout Maximum time in milliseconds to block while waiting for a connection before throwing an exception. This will never throw an exception if creating a new connection takes an inordinately long period of time. 
Default is 0 meaning that a call will wait indefinitely. background-validation Time in milliseconds between background validation runs. A duration of 0 means that this feature is disabled. validate-on-acquisition Connections idle for longer than this time, specified in milliseconds, are validated before being acquired (foreground validation). A duration of 0 means that this feature is disabled. idle-removal Time in minutes a connection has to be idle before it can be removed. leak-detection Time in milliseconds a connection has to be held before a leak warning.
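If you enabled statistics on the datasource, you can confirm that connection pool metrics are being exported by querying the server's metrics endpoint. This is a minimal sketch; the port, credentials, and exact metric names are assumptions that depend on your endpoint and security configuration.

# Query Data Grid Server metrics and filter for datasource and connection pool entries
curl -s -u admin:changeme http://localhost:11222/metrics | grep -i -e datasource -e pool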
|
[
"install org.postgresql:postgresql:42.4.3",
"bin/cli.sh",
"server datasource ls",
"server datasource test my-datasource",
"<server xmlns=\"urn:infinispan:server:15.0\"> <data-sources> <!-- Defines a unique name for the datasource and JNDI name that you reference in JDBC cache store configuration. Enables statistics for the datasource, if required. --> <data-source name=\"ds\" jndi-name=\"jdbc/postgres\" statistics=\"true\"> <!-- Specifies the JDBC driver that creates connections. --> <connection-factory driver=\"org.postgresql.Driver\" url=\"jdbc:postgresql://localhost:5432/postgres\" username=\"postgres\" password=\"changeme\"> <!-- Sets optional JDBC driver-specific connection properties. --> <connection-property name=\"name\">value</connection-property> </connection-factory> <!-- Defines connection pool tuning properties. --> <connection-pool initial-size=\"1\" max-size=\"10\" min-size=\"3\" background-validation=\"1000\" idle-removal=\"1\" blocking-timeout=\"1000\" leak-detection=\"10000\"/> </data-source> </data-sources> </server>",
"{ \"server\": { \"data-sources\": [{ \"name\": \"ds\", \"jndi-name\": \"jdbc/postgres\", \"statistics\": true, \"connection-factory\": { \"driver\": \"org.postgresql.Driver\", \"url\": \"jdbc:postgresql://localhost:5432/postgres\", \"username\": \"postgres\", \"password\": \"changeme\", \"connection-properties\": { \"name\": \"value\" } }, \"connection-pool\": { \"initial-size\": 1, \"max-size\": 10, \"min-size\": 3, \"background-validation\": 1000, \"idle-removal\": 1, \"blocking-timeout\": 1000, \"leak-detection\": 10000 } }] } }",
"server: dataSources: - name: ds jndiName: 'jdbc/postgres' statistics: true connectionFactory: driver: \"org.postgresql.Driver\" url: \"jdbc:postgresql://localhost:5432/postgres\" username: \"postgres\" password: \"changeme\" connectionProperties: name: value connectionPool: initialSize: 1 maxSize: 10 minSize: 3 backgroundValidation: 1000 idleRemoval: 1 blockingTimeout: 1000 leakDetection: 10000",
"<distributed-cache> <persistence> <jdbc:string-keyed-jdbc-store> <!-- Specifies the JNDI name of a managed datasource on Data Grid Server. --> <jdbc:data-source jndi-url=\"jdbc/postgres\"/> <jdbc:string-keyed-table drop-on-exit=\"true\" create-on-start=\"true\" prefix=\"TBL\"> <jdbc:id-column name=\"ID\" type=\"VARCHAR(255)\"/> <jdbc:data-column name=\"DATA\" type=\"BYTEA\"/> <jdbc:timestamp-column name=\"TS\" type=\"BIGINT\"/> <jdbc:segment-column name=\"S\" type=\"INT\"/> </jdbc:string-keyed-table> </jdbc:string-keyed-jdbc-store> </persistence> </distributed-cache>",
"{ \"distributed-cache\": { \"persistence\": { \"string-keyed-jdbc-store\": { \"data-source\": { \"jndi-url\": \"jdbc/postgres\" }, \"string-keyed-table\": { \"prefix\": \"TBL\", \"drop-on-exit\": true, \"create-on-start\": true, \"id-column\": { \"name\": \"ID\", \"type\": \"VARCHAR(255)\" }, \"data-column\": { \"name\": \"DATA\", \"type\": \"BYTEA\" }, \"timestamp-column\": { \"name\": \"TS\", \"type\": \"BIGINT\" }, \"segment-column\": { \"name\": \"S\", \"type\": \"INT\" } } } } } }",
"distributedCache: persistence: stringKeyedJdbcStore: dataSource: jndi-url: \"jdbc/postgres\" stringKeyedTable: prefix: \"TBL\" dropOnExit: true createOnStart: true idColumn: name: \"ID\" type: \"VARCHAR(255)\" dataColumn: name: \"DATA\" type: \"BYTEA\" timestampColumn: name: \"TS\" type: \"BIGINT\" segmentColumn: name: \"S\" type: \"INT\""
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/managed-datasources
|
17.6. Establishing a Token Ring Connection
|
17.6. Establishing a Token Ring Connection A token ring network is a network in which all the computers are connected in a circular pattern. A token , or a special network packet, travels around the token ring and allows computers to send information to each other. Note For more information on using token rings under Linux, refer to the Linux Token Ring Project website available at http://www.linuxtr.net/ . To add a token ring connection, follow these steps: Click the Devices tab. Click the New button on the toolbar. Select Token Ring connection from the Device Type list and click Forward . If you have already added the token ring card to the hardware list, select it from the Tokenring card list. Otherwise, select Other Tokenring Card to add the hardware device. If you selected Other Tokenring Card , the Select Token Ring Adapter window as shown in Figure 17.10, "Token Ring Settings" appears. Select the manufacturer and model of the adapter. Select the device name. If this is the system's first token ring card, select tr0 ; if this is the second token ring card, select tr1 (and so on). The Network Administration Tool also allows the user to configure the resources for the adapter. Click Forward to continue. Figure 17.10. Token Ring Settings On the Configure Network Settings page, choose between DHCP and static IP address. You may specify a hostname for the device. If the device receives a dynamic IP address each time the network is started, do not specify a hostname. Click Forward to continue. Click Apply on the Create Tokenring Device page. After configuring the token ring device, it appears in the device list as shown in Figure 17.11, "Token Ring Device" . Figure 17.11. Token Ring Device Be sure to select File => Save to save the changes. After adding the device, you can edit its configuration by selecting the device from the device list and clicking Edit . For example, you can configure whether the device is started at boot time. When the device is added, it is not activated immediately, as seen by its Inactive status. To activate the device, select it from the device list, and click the Activate button. If the system is configured to activate the device when the computer starts (the default), this step does not have to be performed again.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-network-config-tokenring
|
1.2. Image Builder terminology
|
1.2. Image Builder terminology Blueprints Blueprints define customized system images by listing packages and customizations that will be part of the system. Blueprints can be edited and they are versioned. When a system image is created from a blueprint, the image is associated with the blueprint in the Image Builder interface of the RHEL 7 web console. Blueprints are presented to the user as plain text in the Tom's Obvious, Minimal Language (TOML) format. Compose Composes are individual builds of a system image, based on a particular version of a particular blueprint. Compose as a term refers to the system image, the logs from its creation, inputs, metadata, and the process itself. Customization Customizations are specifications for the system, which are not packages. This includes user accounts, groups, kernel, timezone, locale, firewall and ssh key.
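To make the terminology concrete, the sketch below creates a minimal blueprint and starts a compose from it with the composer-cli tool. The blueprint name, package choice, and image type are illustrative placeholders; adapt them to your environment.

# Create a minimal blueprint in TOML format (assumed example content)
cat > example-http.toml <<'EOF'
name = "example-http"
description = "Minimal web server image"
version = "0.0.1"

[[packages]]
name = "httpd"
version = "*"
EOF

# Push the blueprint, then start a compose (a build of a system image) from it
composer-cli blueprints push example-http.toml
composer-cli compose start example-http qcow2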
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-documentation-image_builder-test_chapter-test_section_2
|
Chapter 3. Editing kubelet log level verbosity and gathering logs
|
Chapter 3. Editing kubelet log level verbosity and gathering logs To troubleshoot some issues with nodes, set the kubelet's log level verbosity according to the issue you want to track. 3.1. Modifying the kubelet as a one-time scenario To modify the kubelet log level in a one-time scenario, without rebooting the node through a machine config change, follow this procedure. This approach lets you modify the kubelet without affecting the service. Procedure Connect to the node in debug mode: USD oc debug node/<node> USD chroot /host Alternatively, it is possible to SSH to the node and become root. After access is established, check the default log level: USD systemctl cat kubelet Example output # /etc/systemd/system/kubelet.service.d/20-logging.conf [Service] Environment="KUBELET_LOG_LEVEL=2" Define the new verbosity required in a new /etc/systemd/system/kubelet.service.d/30-logging.conf file, which overrides /etc/systemd/system/kubelet.service.d/20-logging.conf . In this example, the verbosity is changed from 2 to 8 : USD echo -e "[Service]\nEnvironment=\"KUBELET_LOG_LEVEL=8\"" > /etc/systemd/system/kubelet.service.d/30-logging.conf Reload systemd and restart the service: USD systemctl daemon-reload USD systemctl restart kubelet Gather the logs, and then revert the log level increase: USD rm -f /etc/systemd/system/kubelet.service.d/30-logging.conf USD systemctl daemon-reload USD systemctl restart kubelet 3.2. Persistent kubelet log level configuration Procedure Use the following MachineConfig object for persistent kubelet log level configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-master-kubelet-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - name: kubelet.service enabled: true dropins: - name: 30-logging.conf contents: | [Service] Environment="KUBELET_LOG_LEVEL=2" Generally, it is recommended to apply 0-4 as debug-level logs and 5-8 as trace-level logs. 3.3. Log verbosity descriptions Log verbosity Description --v=0 Always visible to an Operator. --v=1 A reasonable default log level if you do not want verbosity. --v=2 Useful steady state information about the service and important log messages that might correlate to significant changes in the system. This is the recommended default log level. --v=3 Extended information about changes. --v=4 Debug level verbosity. --v=6 Display requested resources. --v=7 Display HTTP request headers. --v=8 Display HTTP request contents. 3.4. Gathering kubelet logs Procedure After the kubelet's log level verbosity is configured properly, you can gather logs by running the following commands: USD oc adm node-logs --role master -u kubelet USD oc adm node-logs --role worker -u kubelet Alternatively, inside the node, run the following command: USD journalctl -b -f -u kubelet.service To collect master container logs, run the following command: USD sudo tail -f /var/log/containers/* To directly gather the logs of all nodes, run the following command: - for n in USD(oc get node --no-headers | awk '{print USD1}'); do oc adm node-logs USDn | gzip > USDn.log.gz; done
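After creating the drop-in and restarting the kubelet, you can confirm that the override is actually in effect. This is a minimal sketch run from the node's debug shell; the exact output depends on your OpenShift version and unit configuration.

# Confirm that the 30-logging.conf drop-in is loaded and the new level is exported
systemctl show kubelet --property=DropInPaths --property=Environment

# The environment output should list KUBELET_LOG_LEVEL=8 while the override is in place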
|
[
"oc debug node/<node>",
"chroot /host",
"systemctl cat kubelet",
"/etc/systemd/system/kubelet.service.d/20-logging.conf [Service] Environment=\"KUBELET_LOG_LEVEL=2\"",
"echo -e \"[Service]\\nEnvironment=\\\"KUBELET_LOG_LEVEL=8\\\"\" > /etc/systemd/system/kubelet.service.d/30-logging.conf",
"systemctl daemon-reload",
"systemctl restart kubelet",
"rm -f /etc/systemd/system/kubelet.service.d/30-logging.conf",
"systemctl daemon-reload",
"systemctl restart kubelet",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-master-kubelet-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - name: kubelet.service enabled: true dropins: - name: 30-logging.conf contents: | [Service] Environment=\"KUBELET_LOG_LEVEL=2\"",
"oc adm node-logs --role master -u kubelet",
"oc adm node-logs --role worker -u kubelet",
"journalctl -b -f -u kubelet.service",
"sudo tail -f /var/log/containers/*",
"- for n in USD(oc get node --no-headers | awk '{print USD1}'); do oc adm node-logs USDn | gzip > USDn.log.gz; done"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/api_overview/editing-kubelet-log-level-verbosity
|
Appendix A. Using your subscription
|
Appendix A. Using your subscription Debezium is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing your account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a subscription Go to access.redhat.com . Navigate to Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading zip and tar files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page . Scroll down to INTEGRATION AND AUTOMATION . Click the name of a component to display a list of the artifacts that are available to download. Click the Download link for the artifact that you want. Revised on 2024-10-09 03:04:12 UTC
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/installing_debezium_on_openshift/using_your_subscription
|
Chapter 6. Additional security privileges granted for kubevirt-controller and virt-launcher
|
Chapter 6. Additional security privileges granted for kubevirt-controller and virt-launcher The kubevirt-controller and virt-launcher pods are granted some SELinux policies and Security Context Constraints privileges that are in addition to those of typical pod owners. These privileges enable virtual machines to use OpenShift Virtualization features. 6.1. Extended SELinux policies for virt-launcher pods The container_t SELinux policy for virt-launcher pods is extended to enable essential functions of OpenShift Virtualization. The following policy is required for network multi-queue, which enables network performance to scale as the number of available vCPUs increases: allow process self (tun_socket (relabelfrom relabelto attach_queue)) The following policy allows virt-launcher to read files under the /proc directory, including /proc/cpuinfo and /proc/uptime : allow process proc_type (file (getattr open read)) The following policy allows libvirtd to relay network-related debug messages: allow process self (netlink_audit_socket (nlmsg_relay)) Note Without this policy, any attempt to relay network debug messages is blocked. This might fill the node's audit logs with SELinux denials. The following policies allow libvirtd to access hugetlbfs , which is required to support huge pages: allow process hugetlbfs_t (dir (add_name create write remove_name rmdir setattr)) allow process hugetlbfs_t (file (create unlink)) The following policies allow virtiofs to mount filesystems and access NFS: allow process nfs_t (dir (mounton)) allow process proc_t (dir (mounton)) allow process proc_t (filesystem (mount unmount)) 6.2. Additional OpenShift Container Platform security context constraints and Linux capabilities for the kubevirt-controller service account Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. The kubevirt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These virt-launcher pods are granted permissions by the kubevirt-controller service account. 6.2.1. Additional SCCs granted to the kubevirt-controller service account The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to take advantage of OpenShift Virtualization features that are beyond the scope of typical pods. The kubevirt-controller service account is granted the following SCCs: scc.AllowHostDirVolumePlugin = true This allows virtual machines to use the hostpath volume plugin. scc.AllowPrivilegedContainer = false This ensures the virt-launcher pod is not run as a privileged container. scc.AllowedCapabilities = []corev1.Capability{"NET_ADMIN", "NET_RAW", "SYS_NICE"} This provides the following additional Linux capabilities: NET_ADMIN , NET_RAW , and SYS_NICE . 6.2.2. Viewing the SCC and RBAC definitions for the kubevirt-controller You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool: USD oc get scc kubevirt-controller -o yaml You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool: USD oc get clusterrole kubevirt-controller -o yaml 6.3.
Additional resources Managing security context constraints Using RBAC to define and apply permissions Optimizing virtual machine network performance in the Red Hat Enterprise Linux (RHEL) documentation Using huge pages with virtual machines Configuring huge pages in the RHEL documentation
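You can also check which SCC was actually applied to a running virt-launcher pod. This is a minimal sketch; the namespace is a placeholder, it assumes at least one virtual machine is running, and the label and annotation names reflect common KubeVirt conventions in this release.

# Find virt-launcher pods and show the SCC annotation that admission applied to each of them
oc get pods -n <namespace> -l kubevirt.io=virt-launcher \
  -o custom-columns=NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc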
|
[
"oc get scc kubevirt-controller -o yaml",
"oc get clusterrole kubevirt-controller -o yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/virt-additional-security-privileges-controller-and-launcher
|
Chapter 2. Red Hat Enterprise Linux Atomic Host 7.8.3
|
Chapter 2. Red Hat Enterprise Linux Atomic Host 7.8.3 2.1. Atomic Host OStree update : New Tree Version: 7.8.3 (hash: dfd383553c0f25c503272b0d193ac863f2deede0fa69278391bd2b1e6d02b56a) Changes since Tree Version 7.8.2 (hash: 48c78ed67690eff2c0ab803c163753ea3d3f00ec001a05f37b41eb6d71b463ab) 2.1.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Universal Base Image 7 Init Container Image (rhel7/ubi7-init) 2.1.2. Last release This is the last release of Red Hat Enterprise Linux Atomic Host.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_8_3
|
Chapter 2. Release notes
|
Chapter 2. Release notes 2.1. Red Hat OpenShift support for Windows Containers release notes 2.1.1. Release notes for Red Hat Windows Machine Config Operator 10.17.0 This release of the WMCO provides bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.17.0 were released in RHSA-2024:TBD . 2.1.1.1. New features and improvements 2.1.1.1.1. Kubernetes upgrade The WMCO now uses Kubernetes 1.30. 2.1.1.2. Bug fixes Previously, if a Windows VM had its PowerShell ExecutionPolicy set to Restricted , the Windows Instance Config Daemon (WICD) could not run the commands on that VM that are necessary for creating Windows nodes. With this fix, the WICD now bypasses the execution policy on the VM when running PowerShell commands. As a result, the WICD can create Windows nodes on the VM as expected. ( OCPBUGS-30995 ) Previously, if reverse DNS lookup failed due to an error, such as the reverse DNS lookup services being unavailable, the WMCO would not fall back to using the VM hostname to determine if a certificate signing requests (CSR) should be approved. As a consequence, Bring-Your-Own-Host (BYOH) Windows nodes configured with an IP address would not become available. With this fix, BYOH nodes are properly added if reverse DNS is not available. ( OCPBUGS-36643 ) Previously, if there were multiple service account token secrets in the WMCO namespace, scaling Windows nodes would fail. With this fix, the WMCO uses only the secret it creates, ignoring any other service account token secrets in the WMCO namespace. As a result, Windows nodes scale properly. ( OCPBUGS-29253 ) 2.2. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator (WMCO). See the vSphere documentation for any information that is relevant to only that platform. 2.2.1. WMCO supported installation method The WMCO fully supports installing Windows nodes into installer-provisioned infrastructure (IPI) clusters. This is the preferred OpenShift Container Platform installation method. For user-provisioned infrastructure (UPI) clusters, the WMCO supports installing Windows nodes only into a UPI cluster installed with the platform: none field set in the install-config.yaml file (bare-metal or provider-agnostic) and only for the BYOH (Bring Your Own Host) use case. UPI is not supported for any other platform. 2.2.2. WMCO 10.17.0 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 10.17.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Nutanix Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. 
For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.2.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Note The WMCO does not support OVN-Kubernetes without hybrid networking or OpenShift SDN. Dual NIC is not supported on WMCO-managed Windows instances. Table 2.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Google Cloud Platform (GCP) Hybrid networking with OVN-Kubernetes Nutanix Hybrid networking with OVN-Kubernetes Bare metal or provider agnostic Hybrid networking with OVN-Kubernetes Table 2.2. Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 Custom VXLAN port Windows Server 2022, OS Build 20348.681 or later Additional resources Hybrid networking 2.3. Windows Machine Config Operator known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat Insights cost management Red Hat OpenShift Local Dual NIC is not supported on WMCO-managed Windows instances. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States). Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues .
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/release-notes
|
Chapter 1. Device Mapper Multipathing
|
Chapter 1. Device Mapper Multipathing Device Mapper Multipathing (DM-Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths. 1.1. Overview of DM-Multipath DM-Multipath can be used to provide: Redundancy DM-Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable, switch, or controller) fails, DM-Multipath switches to an alternate path. Improved Performance DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and dynamically re-balance the load. Figure 1.1, "Active/Passive Multipath Configuration with One RAID Device" shows an active/passive configuration with two I/O paths from the server to a RAID device. There are 2 HBAs on the server, 2 SAN switches, and 2 RAID controllers. Figure 1.1. Active/Passive Multipath Configuration with One RAID Device In this configuration, there is one I/O path that goes through hba1, SAN1, and controller 1 and a second I/O path that goes through hba2, SAN2, and controller2. There are many points of possible failure in this configuration: HBA failure FC cable failure SAN switch failure Array controller port failure With DM-Multipath configured, a failure at any of these points will cause DM-Multipath to switch to the alternate I/O path. Figure 1.2, "Active/Passive Multipath Configuration with Two RAID Devices" shows a more complex active/passive configuration with 2 HBAs on the server, 2 SAN switches, and 2 RAID devices with 2 RAID controllers each. Figure 1.2. Active/Passive Multipath Configuration with Two RAID Devices As in the example shown in Figure 1.1, "Active/Passive Multipath Configuration with One RAID Device" , there are two I/O paths to each RAID device. With DM-Multipath configured, a failure at any of the points of the I/O path to either of the RAID devices will cause DM-Multipath to switch to the alternate I/O path for that device. Figure 1.3, "Active/Active Multipath Configuration with One RAID Device" shows an active/active configuration with 2 HBAs on the server, 1 SAN switch, and 2 RAID controllers. There are four I/O paths from the server to a storage device: hba1 to controller1 hba1 to controller2 hba2 to controller1 hba2 to controller2 In this configuration, I/O can be spread among those four paths. Figure 1.3. Active/Active Multipath Configuration with One RAID Device
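Once DM-Multipath is configured, the topologies described above can be inspected from the command line. This is a minimal sketch; the device names, WWIDs, and path counts in the output will differ on your system.

# Display the multipath devices, their path groups, and the state of each path
multipath -ll

# Show which device mapper targets back the multipath devices
dmsetup ls --target multipath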
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/MPIO_Overview
|
Appendix A. Revision History
|
Appendix A. Revision History Revision History Revision 1.0-56 Thu May 23 2019 Jiri Herrmann Version for 7.6 GA publication Revision 1.0-55 Thu Oct 25 2018 Jiri Herrmann Version for 7.6 GA publication Revision 1.0-53 Thu Aug 5 2018 Jiri Herrmann Version for 7.6 Beta publication Revision 1.0-52 Thu Apr 5 2018 Jiri Herrmann Version for 7.5 GA publication Revision 1.0-49 Thu Jul 27 2017 Jiri Herrmann Version for 7.4 GA publication Revision 1.0-46 Mon Oct 17 2016 Jiri Herrmann Version for 7.3 GA publication Revision 1.0-44 Mon Dec 21 2015 Laura Novich Republished the guide for several bug fixes Revision 1.0-43 Thu Oct 08 2015 Jiri Herrmann Cleaned up the Revision History Revision 1.0-42 Sun Jun 28 2015 Jiri Herrmann Updated for the 7.2 beta release
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/appe-virtualization_getting_started_guide-revision_history
|
Chapter 14. Hardening the Dashboard service
|
Chapter 14. Hardening the Dashboard service The Dashboard service (horizon) gives users a self-service portal for provisioning their own resources within the limits set by administrators. Manage the security of the Dashboard service with the same sensitivity as the OpenStack APIs. 14.1. Debugging the Dashboard service The default value for the DEBUG parameter is False . Keep the default value in your production environment. Change this setting only during investigation. When you change the value of the DEBUG parameter to True , Django can output stack traces to browser users that contain sensitive web server state information. When the value of the DEBUG parameter is True , the ALLOWED_HOSTS settings are also disabled. For more information on configuring ALLOWED_HOSTS , see Configure ALLOWED_HOSTS . 14.2. Selecting a domain name It is a best practice to deploy the Dashboard service (horizon) to a second level domain, as opposed to a shared domain on any level. Examples of each are provided below: Second level domain: https://example.com Shared subdomain: https://example.public-url.com Deploying the Dashboard service to a dedicated second level domain isolates cookies and security tokens from other domains, based on browsers' same-origin policy. When deployed on a subdomain, the security of the Dashboard service is equivalent to the least secure application deployed on the same second-level domain. You can further mitigate this risk by avoiding a cookie-backed session store, and configuring HTTP Strict Transport Security (HSTS) (described in this guide). Note Deploying the Dashboard service on a bare domain, like https://example/ , is unsupported. 14.3. Configure ALLOWED_HOSTS Horizon is built on the python Django web framework, which requires protection against security threats associated with misleading HTTP Host headers. To apply this protection, configure the ALLOWED_HOSTS setting to use the FQDN that is served by the OpenStack dashboard. When you configure the ALLOWED_HOSTS setting, any HTTP request with a Host header that does not match the values in this list is denied, and an error is raised. Procedure Under parameter_defaults in your templates, set the value of the HorizonAllowedHosts parameter: Replace <value> with the FQDN that is served by the OpenStack dashboard. Deploy the overcloud with the modified template, and all other templates required for your environment. 14.4. Cross Site Scripting (XSS) The OpenStack Dashboard accepts the entire Unicode character set in most fields. Malicious actors can attempt to use this extensibility to test for cross-site scripting (XSS) vulnerabilities. The OpenStack Dashboard service (horizon) has tools that harden against XSS vulnerabilities. It is important to ensure the correct use of these tools in custom dashboards. When you perform an audit against custom dashboards, pay attention to the following: The mark_safe function. is_safe - when used with custom template tags. The safe template tag. Anywhere auto escape is turned off, and any JavaScript which might evaluate improperly escaped data. 14.5. Cross Site Request Forgery (CSRF) Dashboards that use multiple JavaScript instances should be audited for vulnerabilities such as inappropriate use of the @csrf_exempt decorator. Evaluate any dashboard that does not follow recommended security settings before lowering CORS (Cross Origin Resource Sharing) restrictions. Configure your web server to send a restrictive CORS header with each response.
Allow only the dashboard domain and protocol, for example: Access-Control-Allow-Origin: https://example.com/ . You should never allow the wild card origin. 14.6. Allow iframe embedding The DISALLOW_IFRAME_EMBED setting disallows Dashboard from being embedded within an iframe. Legacy browsers can still be vulnerable to Cross-Frame Scripting (XFS) vulnerabilities, so this option adds extra security hardening for deployments that do not require iframes. The setting is set to True by default, however it can be disabled using an environment file, if needed. Procedure You can allow iframe embedding using the following parameter: Note These settings should only be set to False once the potential security impacts are fully understood. 14.7. Using HTTPS encryption for Dashboard traffic It is recommended you use HTTPS to encrypt Dashboard traffic. You can do this by configuring it to use a valid, trusted certificate from a recognized certificate authority (CA). Private organization-issued certificates are only appropriate when the root of trust is pre-installed in all user browsers. Configure HTTP requests to the dashboard domain to redirect to the fully qualified HTTPS URL. See Chapter 7, Enabling SSL/TLS on overcloud public endpoints . for more information. 14.8. HTTP Strict Transport Security (HSTS) HTTP Strict Transport Security (HSTS) prevents browsers from making subsequent insecure connections after they have initially made a secure connection. If you have deployed your HTTP services on a public or an untrusted zone, HSTS is especially important. For director-based deployments, this setting is enabled by default in the /usr/share/openstack-tripleo-heat-templates/deployment/horizon/horizon-container-puppet.yaml file: Verification After the overcloud is deployed, check the local_settings file for Red Hat OpenStack Dashboard (horizon) for verification. Use ssh to connect to a controller: USD ssh tripleo-admin@controller-0 Check that the SECURE_PROXY_SSL_HEADER parameter has a value of ('HTTP_X_FORWARDED_PROTO', 'https') : sudo egrep ^SECURE_PROXY_SSL_HEADER /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') 14.9. Front-end caching It is not recommended to use front-end caching tools with the Dashboard, as it renders dynamic content resulting directly from OpenStack API requests. As a result, front-end caching layers such as varnish can prevent the correct content from being displayed. The Dashboard uses Django, which serves static media directly served from the web service and already benefits from web host caching. 14.10. Session backend For director-based deployments, the default session backend for horizon is django.contrib.sessions.backends.cache , which is combined with memcached. This approach is preferred to local-memory cache for performance reasons, is safer for highly-available and load balanced installs, and has the ability to share cache over multiple servers, while still treating it as a single cache. You can review these settings in director's horizon.yaml file: 14.11. Reviewing the secret key The Dashboard depends on a shared SECRET_KEY setting for some security functions. The secret key should be a randomly generated string at least 64 characters long, which must be shared across all active dashboard instances. Compromise of this key might allow a remote attacker to execute arbitrary code. Rotating this key invalidates existing user sessions and caching. 
Do not commit this key to public repositories. For director deployments, this setting is managed as the HorizonSecret value. 14.12. Configuring session cookies The Dashboard session cookies can be open to interaction by browser technologies, such as JavaScript. For director deployments with TLS everywhere, you can harden this behavior using the HorizonSecureCookies setting. Note Never configure CSRF or session cookies to use a wildcard domain with a leading dot. 14.13. Static media The dashboard's static media should be deployed to a subdomain of the dashboard domain and served by the web server. The use of an external content delivery network (CDN) is also acceptable. This subdomain should not set cookies or serve user-provided content. The media should also be served with HTTPS. Dashboard's default configuration uses django_compressor to compress and minify CSS and JavaScript content before serving it. This process should be statically done before deploying the dashboard, rather than using the default in-request dynamic compression and copying the resulting files along with deployed code or to the CDN server. Compression should be done in a non-production build environment. If this is not practical, consider disabling resource compression entirely. Online compression dependencies (less, Node.js) should not be installed on production machines. 14.14. Validating password complexity The OpenStack Dashboard (horizon) can use a password validation check to enforce password complexity. Procedure Specify a regular expression for password validation, as well as help text to be displayed for failed tests. The following example requires users to create a password of between 8 to 18 characters in length: Apply this change to your deployment. Save the settings as a file called horizon_password.yaml , and then pass it to the overcloud deploy command as follows. The <full environment> indicates that you must still include all of your original deployment parameters. For example: 14.15. Enforce the administrator password check The following setting is set to True by default, however it can be disabled using an environment file, if needed. Note These settings should only be set to False once the potential security impacts are fully understood. Procedure The ENFORCE_PASSWORD_CHECK setting in Dashboard's local_settings.py file displays an Admin Password field on the Change Password form, which helps verify that an administrator is initiating the password change. You can disable ENFORCE_PASSWORD_CHECK using an environment file: 14.16. Disable password reveal The disable_password_reveal parameter is set to True by default, however it can be disabled using an environment file, if needed. The password reveal button allows a user at the Dashboard to view the password they are about to enter. Procedure Under the ControllerExtraConfig parameter, include horizon::disable_password_reveal: false . Save this to a heat environment file and include it with your deployment command. Example Note These settings should only be set to False once the potential security impacts are fully understood. 14.17. Displaying a logon banner for the Dashboard Regulated industries such as HIPAA, PCI-DSS, and the US Government require you to display a user logon banner. The Red Hat OpenStack Platform (RHOSP) dashboard (horizon) uses a default theme (RCUE), which is stored inside the horizon container. 
Within the custom Dashboard container, you can create a logon banner by manually editing the /usr/share/openstack-dashboard/openstack_dashboard/themes/rcue/templates/auth/login.html file: Procedure Enter the required logon banner just before the {% include 'auth/_login.html' %} section. HTML tags are allowed: The above example produces a dashboard similar to the following: Additional resources Customizing the dashboard 14.18. Limiting the size of file uploads You can optionally configure the dashboard to limit the size of file uploads; this setting might be a requirement for various security hardening policies. LimitRequestBody - This value (in bytes) limits the maximum size of a file that you can transfer using the Dashboard, such as images and other large files. Important This setting has not been formally tested by Red Hat. It is recommended that you thoroughly test the effect of this setting before deploying it to your production environment. Note File uploads will fail if the value is too small. For example, this setting limits each file upload to a maximum size of 10 GB ( 10737418240 ). You will need to adjust this value to suit your deployment. /var/lib/config-data/puppet-generated/horizon/etc/httpd/conf/httpd.conf /var/lib/config-data/puppet-generated/horizon/etc/httpd/conf.d/10-horizon_vhost.conf /var/lib/config-data/puppet-generated/horizon/etc/httpd/conf.d/15-horizon_ssl_vhost.conf Note These configuration files are managed by Puppet, so any unmanaged changes are overwritten whenever you run the openstack overcloud deploy process.
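After the hardening options are applied, a quick external check can confirm that the dashboard is serving the expected security headers. This is a minimal sketch; the hostname is a placeholder and the exact headers present depend on which options you enabled.

# Inspect the response headers served by the dashboard endpoint
curl -skI https://dashboard.example.com/dashboard | grep -iE 'strict-transport-security|access-control-allow-origin|set-cookie'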
|
[
"parameter_defaults: HorizonAllowedHosts: <value>",
"parameter_defaults: ControllerExtraConfig: horizon::disallow_iframe_embed: false",
"horizon::enable_secure_proxy_ssl_header: true",
"ssh tripleo-admin@controller-0",
"sudo egrep ^SECURE_PROXY_SSL_HEADER /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')",
"horizon::cache_backend: django.core.cache.backends.memcached.MemcachedCache horizon::django_session_engine: 'django.contrib.sessions.backends.cache'",
"parameter_defaults: HorizonPasswordValidator: '^.{8,18}USD' HorizonPasswordValidatorHelp: 'Password must be between 8 and 18 characters.'",
"openstack overcloud deploy --templates -e <full environment> -e horizon_password.yaml",
"parameter_defaults: ControllerExtraConfig: horizon::enforce_password_check: false",
"parameter_defaults: ControllerExtraConfig: horizon::disable_password_reveal: false",
"<snip> <div class=\"container\"> <div class=\"row-fluid\"> <div class=\"span12\"> <div id=\"brand\"> <img src=\"../../static/themes/rcue/images/RHOSP-Login-Logo.svg\"> </div><!--/#brand--> </div><!--/.span*--> <!-- Start of Logon Banner --> <p>Authentication to this information system reflects acceptance of user monitoring agreement.</p> <!-- End of Logon Banner --> {% include 'auth/_login.html' %} </div><!--/.row-fluid-> </div><!--/.container--> {% block js %} {% include \"horizon/_scripts.html\" %} {% endblock %} </body> </html>",
"<Directory /> LimitRequestBody 10737418240 </Directory>",
"<Directory \"/var/www\"> LimitRequestBody 10737418240 </Directory>",
"<Directory \"/var/www\"> LimitRequestBody 10737418240 </Directory>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_hardening-the-dashboard-service_security_and_hardening
|
2.2.7.2. Python Documentation
|
2.2.7.2. Python Documentation For more information about Python, see man python . You can also install python-docs , which provides HTML manuals and references in the following location: file:///usr/share/doc/python-docs- version /html/index.html For details on library and language components, use pydoc component_name . For example, pydoc math will display the following information about the math Python module: The main site for the Python development project is hosted on python.org .
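For example, the following shell commands illustrate the documentation tools described above; the module names are arbitrary illustrations, not a prescribed workflow:
man python
pydoc math
pydoc -k http        # search module synopses by keyword
python -m pydoc os.path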
|
[
"Help on module math: NAME math FILE /usr/lib64/python2.6/lib-dynload/mathmodule.so DESCRIPTION This module is always available. It provides access to the mathematical functions defined by the C standard. FUNCTIONS acos[...] acos(x) Return the arc cosine (measured in radians) of x. acosh[...] acosh(x) Return the hyperbolic arc cosine (measured in radians) of x. asin(...) asin(x) Return the arc sine (measured in radians) of x. asinh[...] asinh(x) Return the hyperbolic arc sine (measured in radians) of x."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/python.docs
|
Chapter 49. EntityTopicOperatorSpec schema reference
|
Chapter 49. EntityTopicOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityTopicOperatorSpec schema properties Configures the Topic Operator. 49.1. logging The Topic Operator has a configurable logger: rootLogger.level The Topic Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.topicOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO logger.top.name: io.strimzi.operator.topic 1 logger.top.level: DEBUG 2 logger.toc.name: io.strimzi.operator.topic.TopicOperator 3 logger.toc.level: TRACE 4 logger.clients.level: DEBUG 5 # ... 1 Creates a logger for the topic package. 2 Sets the logging level for the topic package. 3 Creates a logger for the TopicOperator class. 4 Sets the logging level for the TopicOperator class. 5 Changes the logging level for the default clients logger. The clients logger is part of the logging configuration provided with Streams for Apache Kafka. By default, it is set to INFO . Note When investigating an issue with the operator, it's usually sufficient to change the rootLogger to DEBUG to get more detailed logs. However, keep in mind that setting the log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 49.2. EntityTopicOperatorSpec schema properties Property Property type Description watchedNamespace string The namespace the Topic Operator should watch. image string The image to use for the Topic Operator. reconciliationIntervalSeconds integer Interval between periodic reconciliations. zookeeperSessionTimeoutSeconds integer Timeout for the ZooKeeper session.
startupProbe Probe Pod startup checking. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. resources ResourceRequirements CPU and memory resources to reserve. topicMetadataMaxAttempts integer The number of attempts at getting topic metadata. logging InlineLogging , ExternalLogging Logging configuration. jvmOptions JvmOptions JVM Options for pods.
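For the external logging example above, the referenced ConfigMap must provide the topic-operator-log4j2.properties key. The following is a minimal, hedged sketch of such a ConfigMap; the console appender layout is an assumption added for illustration, so adapt the log4j2 configuration to your environment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  topic-operator-log4j2.properties: |
    name = TopicOperatorConfig
    appender.console.type = Console
    appender.console.name = STDOUT
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n
    rootLogger.level = INFO
    rootLogger.appenderRef.console.ref = STDOUT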
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO logger.top.name: io.strimzi.operator.topic 1 logger.top.level: DEBUG 2 logger.toc.name: io.strimzi.operator.topic.TopicOperator 3 logger.toc.level: TRACE 4 logger.clients.level: DEBUG 5 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties #"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-EntityTopicOperatorSpec-reference
|
Chapter 1. Red Hat OpenShift Service on AWS 4 Documentation
|
Chapter 1. Red Hat OpenShift Service on AWS 4 Documentation Welcome to the official Red Hat OpenShift Service on AWS (ROSA) documentation, where you can learn about ROSA and start exploring its features. To learn about ROSA, interacting with ROSA by using Red Hat OpenShift Cluster Manager and command-line interface (CLI) tools, consumption experience, and integration with Amazon Web Services (AWS) services, start with the Introduction to ROSA documentation . To navigate the ROSA documentation, use the left navigation bar.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/about/welcome-index
|
7.145. mlocate
|
7.145. mlocate 7.145.1. RHBA-2012:1355 - mlocate bug fix update Updated mlocate packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The mlocate packages provide a locate/updatedb implementation. Mlocate keeps a database of all existing files and allows you to look up files by name. Bug Fixes BZ# 690800 Prior to this update, the locate(1) manual page contained a misprint. This update corrects the misprint. BZ# 699363 Prior to this update, the mlocate tool aborted the "updatedb" command if an incorrect filesystem implementation returned a zero-length file name. As a consequence, the locate database was not updated. This update detects invalid zero-length file names, warns about them, and continues updating the locate database. All users of mlocate are advised to upgrade to these updated packages, which fix these bugs.
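As a brief usage illustration (not part of the erratum itself), the database is refreshed and then queried as follows; updatedb normally requires root privileges and the file name pattern is arbitrary:
updatedb
locate httpd.conf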
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/mlocate
|
18.4. Single File Cache Store
|
18.4. Single File Cache Store Red Hat JBoss Data Grid includes one file system based cache store: the SingleFileCacheStore . The SingleFileCacheStore is a simple, file system based implementation and a replacement for the older file system based cache store: the FileCacheStore . SingleFileCacheStore stores all key/value pairs and their corresponding metadata information in a single file. To speed up data location, it also keeps all keys and the positions of their values and metadata in memory. Hence, using the single file cache store slightly increases the memory required, depending on the key size and the number of keys stored. Hence SingleFileCacheStore is not recommended for use cases where the keys are very large. To reduce memory consumption, the size of the cache store can be set to a fixed number of entries to store in the file. However, this works only when Infinispan is used as a cache. When Infinispan is used this way, data which is not present in Infinispan can be recomputed or re-retrieved from the authoritative data store and stored in the Infinispan cache. The reason for this limitation is that once the maximum number of entries is reached, older data in the cache store is removed, so if Infinispan was used as an authoritative data store, it would lead to data loss, which is undesirable in this use case. Due to its limitations, SingleFileCacheStore can be used in a limited capacity in production environments. It cannot be used on a shared file system (such as NFS and Windows shares) due to a lack of proper file locking, resulting in data corruption. Furthermore, file systems are not inherently transactional, resulting in file writing failures during the commit phase if the cache is used in a transactional context. 18.4.1. Single File Store Configuration (Remote Client-Server Mode) The following is an example of a Single File Store configuration for Red Hat JBoss Data Grid's Remote Client-Server mode: For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . 18.4.2. Single File Store Configuration (Library Mode) In Red Hat JBoss Data Grid's Library mode, configure a Single File Cache Store as follows: For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . 18.4.3. Upgrade JBoss Data Grid Cache Stores Red Hat JBoss Data Grid stores data in a different format than previous versions of JBoss Data Grid. As a result, the newer version of JBoss Data Grid cannot read data stored by older versions. Use rolling upgrades to upgrade persisted data from the format used by the old JBoss Data Grid to the new format. Additionally, the newer version of JBoss Data Grid also stores persistence configuration information in a different location. A rolling upgrade is the process by which a JBoss Data Grid installation is upgraded without a service shutdown. In Library mode, it refers to a node installation where JBoss Data Grid is running in Library mode. For JBoss Data Grid servers, it refers to the server side components. The upgrade can be due to either a hardware or software change, such as upgrading JBoss Data Grid. Rolling upgrades are only available in JBoss Data Grid's Remote Client-Server mode.
|
[
"<local-cache name=\"default\" statistics=\"true\"> <file-store name=\"myFileStore\" passivation=\"true\" purge=\"true\" relative-to=\"{PATH}\" path=\"{DIRECTORY}\" max-entries=\"10000\" fetch-state=\"true\" preload=\"false\" /> </local-cache>",
"<namedCache name=\"writeThroughToFile\"> <persistence passivation=\"false\"> <singleFile fetchPersistentState=\"true\" ignoreModifications=\"false\" purgeOnStartup=\"false\" shared=\"false\" preload=\"false\" location=\"/tmp/Another-FileCacheStore-Location\" maxEntries=\"100\" maxKeysInMemory=\"100\"> <async enabled=\"true\" threadPoolSize=\"500\" flushLockTimeout=\"1\" modificationQueueSize=\"1024\" shutdownTimeout=\"25000\"/> </singleFile> </persistence> </namedCache>"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-Single_File_Cache_Store
|
Chapter 9. CSISnapshotController [operator.openshift.io/v1]
|
Chapter 9. CSISnapshotController [operator.openshift.io/v1] Description CSISnapshotController provides a means to configure an operator to manage the CSI snapshots. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 9.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 9.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 9.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 9.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 9.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 9.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 9.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/csisnapshotcontrollers DELETE : delete collection of CSISnapshotController GET : list objects of kind CSISnapshotController POST : create a CSISnapshotController /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name} DELETE : delete a CSISnapshotController GET : read the specified CSISnapshotController PATCH : partially update the specified CSISnapshotController PUT : replace the specified CSISnapshotController /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name}/status GET : read status of the specified CSISnapshotController PATCH : partially update status of the specified CSISnapshotController PUT : replace status of the specified CSISnapshotController 9.2.1. /apis/operator.openshift.io/v1/csisnapshotcontrollers HTTP method DELETE Description delete collection of CSISnapshotController Table 9.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CSISnapshotController Table 9.2. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotControllerList schema 401 - Unauthorized Empty HTTP method POST Description create a CSISnapshotController Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 202 - Accepted CSISnapshotController schema 401 - Unauthorized Empty 9.2.2. /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the CSISnapshotController HTTP method DELETE Description delete a CSISnapshotController Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSISnapshotController Table 9.9. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSISnapshotController Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSISnapshotController Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 401 - Unauthorized Empty 9.2.3. /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name}/status Table 9.15. Global path parameters Parameter Type Description name string name of the CSISnapshotController HTTP method GET Description read status of the specified CSISnapshotController Table 9.16. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CSISnapshotController Table 9.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.18. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CSISnapshotController Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 401 - Unauthorized Empty
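The following is a hedged sketch of a minimal CSISnapshotController custom resource built from the spec fields described above. cluster is the canonical name; logLevel and operatorLogLevel default to Normal as noted in the schema, while managementState: Managed is a typical value and an assumption here:
apiVersion: operator.openshift.io/v1
kind: CSISnapshotController
metadata:
  name: cluster
spec:
  managementState: Managed
  logLevel: Normal
  operatorLogLevel: Normal
It can be inspected with standard commands, for example: oc get csisnapshotcontroller cluster -o yaml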
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/csisnapshotcontroller-operator-openshift-io-v1
|
Chapter 1. About Red Hat OpenShift Pipelines
|
Chapter 1. About Red Hat OpenShift Pipelines Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions. Note Because Red Hat OpenShift Pipelines releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift Pipelines documentation is now available as a separate documentation set at Red Hat OpenShift Pipelines .
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/pipelines/about-pipelines
|
Chapter 3. Restarting the cluster gracefully
|
Chapter 3. Restarting the cluster gracefully This document describes the process to restart your cluster after a graceful shutdown. Even though the cluster is expected to be functional after the restart, the cluster might not recover due to unexpected conditions, for example: etcd data corruption during shutdown Node failure due to hardware Network connectivity issues If your cluster fails to recover, follow the steps to restore to a cluster state . 3.1. Prerequisites You have gracefully shut down your cluster . 3.2. Restarting the cluster You can restart your cluster after it has been shut down gracefully. Prerequisites You have access to the cluster as a user with the cluster-admin role. This procedure assumes that you gracefully shut down the cluster. Procedure Power on any cluster dependencies, such as external storage or an LDAP server. Start all cluster machines. Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider's web console. Wait approximately 10 minutes before continuing to check the status of control plane nodes (also known as the master nodes). Verify that all control plane nodes are ready. USD oc get nodes -l node-role.kubernetes.io/master The control plane nodes are ready if the status is Ready , as shown in the following output: NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 75m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 75m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 75m v1.20.0 If the control plane nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. Get the list of current CSRs: USD oc get csr Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid CSR: USD oc adm certificate approve <csr_name> After the control plane nodes are ready, verify that all worker nodes are ready. USD oc get nodes -l node-role.kubernetes.io/worker The worker nodes are ready if the status is Ready , as shown in the following output: NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 64m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 64m v1.20.0 If the worker nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. Get the list of current CSRs: USD oc get csr Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid CSR: USD oc adm certificate approve <csr_name> Verify that the cluster started properly. Check that there are no degraded cluster Operators. USD oc get clusteroperators Check that there are no cluster Operators with the DEGRADED condition set to True . NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 59m cloud-credential 4.7.0 True False False 85m cluster-autoscaler 4.7.0 True False False 73m config-operator 4.7.0 True False False 73m console 4.7.0 True False False 62m csi-snapshot-controller 4.7.0 True False False 66m dns 4.7.0 True False False 76m etcd 4.7.0 True False False 76m ... Check that all nodes are in the Ready state: USD oc get nodes Check that the status for all nodes is Ready . 
NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.20.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.20.0 If the cluster did not start properly, you might need to restore your cluster using an etcd backup. Additional resources See Restoring to a cluster state for how to use an etcd backup to restore if your cluster failed to recover after restarting.
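If many CSRs are pending after the restart, they can be approved in bulk rather than one at a time. The following one-liner approves every CSR that does not yet have a status; review the pending requests and your security policy before running it:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve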
|
[
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 75m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 75m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 75m v1.20.0",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 64m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 64m v1.20.0",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 59m cloud-credential 4.7.0 True False False 85m cluster-autoscaler 4.7.0 True False False 73m config-operator 4.7.0 True False False 73m console 4.7.0 True False False 62m csi-snapshot-controller 4.7.0 True False False 66m dns 4.7.0 True False False 76m etcd 4.7.0 True False False 76m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.20.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.20.0"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/backup_and_restore/graceful-restart-cluster
|
Chapter 5. Changes in Rust 1.71.1 Toolset
|
Chapter 5. Changes in Rust 1.71.1 Toolset Rust Toolset has been updated from version 1.66.1 to 1.71.1 on RHEL 8 and RHEL 9. Notable changes include: A new implementation of multiple producer, single consumer (mpsc) channels to improve performance. A new Cargo sparse index protocol for more efficient use of the crates.io registry. New OnceCell and OnceLock types for one-time value initialization. A new C-unwind ABI string to enable usage of forced unwinding across Foreign Function Interface (FFI) boundaries. For detailed information regarding the updates, see the series of upstream release announcements: Announcing Rust 1.67.0 Announcing Rust 1.68.0 Announcing Rust 1.69.0 Announcing Rust 1.70.0 Announcing Rust 1.71.0
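As a small illustration of one of the listed additions (this snippet is a generic example, not taken from the release notes), OnceLock provides thread-safe, one-time initialization of a static value:
use std::sync::OnceLock;

static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    // get_or_init runs the closure at most once, even if called from many threads.
    GREETING.get_or_init(|| String::from("initialized exactly once"))
}

fn main() {
    println!("{}", greeting());
}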
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.71.1_toolset/assembly_changes-in-rust-toolset
|
Chapter 5. Unsupported features when using JBoss EAP in Microsoft Azure
|
Chapter 5. Unsupported features when using JBoss EAP in Microsoft Azure There are some unsupported features when using JBoss EAP in Microsoft Azure. ActiveMQ Artemis High Availability Using a Shared Store JBoss EAP messaging high availability using Artemis shared stores is not supported in Microsoft Azure. To configure JBoss EAP messaging high availability in Azure, see the instructions in the Configuration for ActiveMQ Artemis high availability in Microsoft Azure section. mod_cluster Advertising If you want to use JBoss EAP as an Undertow mod_cluster proxy load balancer, the mod_cluster advertisement functionality is unsupported because of Azure UDP multicast limitations. For more information, see Configuration for load balancing with mod_cluster in Microsoft Azure . Virtual Machine Scale Set Transactions in Microsoft Azure Virtual Machine Scale Set (VMSS) is unsupported because the automatic scale down feature does not wait for all transactions to be completed during the scale down process. This could cause data integrity issues. Microsoft Azure VMSS destroys an EAP VM and does not support proper shutdown, which results in the following limitation for EAP clustering: VMSS is supported only for configurations in which the server-side state for HA is externalized to a third-party service, such as Red Hat Data Grid. JBoss EAP supports VMSS for HttpSession externalization, but not for stateful session beans. Azure App Service JTS is not supported in the JBoss EAP Azure App Service offering. Note Although JTS is not supported, Jakarta Transactions are supported under the following conditions: The automatic removal of instances is disabled. Instances are not removed manually, for example, by reducing the number of running instances using the Azure dashboard. Also, transactions over Jakarta Enterprise Beans-remoting are not supported.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_red_hat_jboss_enterprise_application_platform_in_microsoft_azure/unsupported-features-when-using-server-in-microsoft-azure_default
|
Chapter 3. The Ceph client components
|
Chapter 3. The Ceph client components Ceph clients differ materially in how they present data storage interfaces. A Ceph block device presents block storage that mounts just like a physical storage drive. A Ceph gateway presents an object storage service with S3-compliant and Swift-compliant RESTful interfaces with its own user management. However, all Ceph clients use the Reliable Autonomic Distributed Object Store (RADOS) protocol to interact with the Red Hat Ceph Storage cluster. They all have the same basic needs: The Ceph configuration file, and the Ceph monitor address. The pool name. The user name and the path to the secret key. Ceph clients tend to follow some similar patterns, such as object-watch-notify and striping. The following sections describe a little bit more about RADOS, librados and common patterns used in Ceph clients. Prerequisites A basic understanding of distributed storage systems. 3.1. Ceph client native protocol Modern applications need a simple object storage interface with asynchronous communication capability. The Ceph Storage Cluster provides a simple object storage interface with asynchronous communication capability. The interface provides direct, parallel access to objects throughout the cluster. Pool Operations Snapshots Read/Write Objects Create or Remove Entire Object or Byte Range Append or Truncate Create/Set/Get/Remove XATTRs Create/Set/Get/Remove Key/Value Pairs Compound operations and dual-ack semantics 3.2. Ceph client object watch and notify A Ceph client can register a persistent interest with an object and keep a session to the primary OSD open. The client can send a notification message and payload to all watchers and receive notification when the watchers receive the notification. This enables a client to use any object as a synchronization/communication channel. 3.3. Ceph client Mandatory Exclusive Locks Mandatory Exclusive Locks is a feature that locks an RBD to a single client, if multiple mounts are in place. This helps address the write conflict situation when multiple mounted clients try to write to the same object. This feature is built on object-watch-notify explained in the section. So, when writing, if one client first establishes an exclusive lock on an object, another mounted client will first check to see if a peer has placed a lock on the object before writing. With this feature enabled, only one client can modify an RBD device at a time, especially when changing internal RBD structures during operations like snapshot create/delete . It also provides some protection for failed clients. For instance, if a virtual machine seems to be unresponsive and you start a copy of it with the same disk elsewhere, the first one will be blacklisted in Ceph and unable to corrupt the new one. Mandatory Exclusive Locks are not enabled by default. You have to explicitly enable it with --image-feature parameter when creating an image. Example Here, the numeral 5 is a summation of 1 and 4 where 1 enables layering support and 4 enables exclusive locking support. So, the above command will create a 100 GB rbd image, enable layering and exclusive lock. Mandatory Exclusive Locks is also a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. Mandatory Exclusive Locks also does some ground work for mirroring. 3.4. Ceph client object map Object map is a feature that tracks the presence of backing RADOS objects when a client writes to an rbd image. 
When a write occurs, that write is translated to an offset within a backing RADOS object. When the object map feature is enabled, the presence of these RADOS objects is tracked. So, we can know if the objects actually exist. Object map is kept in-memory on the librbd client so it can avoid querying the OSDs for objects that it knows don't exist. In other words, object map is an index of the objects that actually exist. Object map is beneficial for certain operations, viz: Resize Export Copy Flatten Delete Read A shrink resize operation is like a partial delete where the trailing objects are deleted. An export operation knows which objects are to be requested from RADOS. A copy operation knows which objects exist and need to be copied. It does not have to iterate over potentially hundreds and thousands of possible objects. A flatten operation performs a copy-up for all parent objects to the clone so that the clone can be detached from the parent i.e, the reference from the child clone to the parent snapshot can be removed. So, instead of all potential objects, copy-up is done only for the objects that exist. A delete operation deletes only the objects that exist in the image. A read operation skips the read for objects it knows doesn't exist. So, for operations like resize, shrinking only, exporting, copying, flattening, and deleting, these operations would need to issue an operation for all potentially affected RADOS objects, whether they exist or not. With object map enabled, if the object doesn't exist, the operation need not be issued. For example, if we have a 1 TB sparse RBD image, it can have hundreds and thousands of backing RADOS objects. A delete operation without object map enabled would need to issue a remove object operation for each potential object in the image. But if object map is enabled, it only needs to issue remove object operations for the objects that exist. Object map is valuable against clones that don't have actual objects but get objects from parents. When there is a cloned image, the clone initially has no objects and all reads are redirected to the parent. So, object map can improve reads as without the object map, first it needs to issue a read operation to the OSD for the clone, when that fails, it issues another read to the parent - with object map enabled. It skips the read for objects it knows doesn't exist. Object map is not enabled by default. You have to explicitly enable it with --image-features parameter when creating an image. Also, Mandatory Exclusive Locks is a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. To enable object map support when creating a image, execute: Here, the numeral 13 is a summation of 1 , 4 and 8 where 1 enables layering support, 4 enables exclusive locking support and 8 enables object map support. So, the above command will create a 100 GB rbd image, enable layering, exclusive lock and object map. 3.5. Ceph client data stripping Storage devices have throughput limitations, which impact performance and scalability. So storage systems often support striping- storing sequential pieces of information across multiple storage devices- to increase throughput and performance. The most common form of data striping comes from RAID. The RAID type most similar to Ceph's striping is RAID 0, or a 'striped volume.' Ceph's striping offers the throughput of RAID 0 striping, the reliability of n-way RAID mirroring and faster recovery. 
Ceph provides three types of clients: Ceph Block Device, Ceph Filesystem, and Ceph Object Storage. A Ceph Client converts its data from the representation format it provides to its users, such as a block device image, RESTful objects, or CephFS filesystem directories, into objects for storage in the Ceph Storage Cluster. Tip The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data over multiple Ceph Storage Cluster objects. Ceph Clients that write directly to the Ceph storage cluster using librados must perform the striping and parallel I/O for themselves to obtain these benefits. The simplest Ceph striping format involves a stripe count of 1 object. Ceph Clients write stripe units to a Ceph Storage Cluster object until the object is at its maximum capacity, and then create another object for additional stripes of data. The simplest form of striping may be sufficient for small block device images, S3 or Swift objects. However, this simple form doesn't take maximum advantage of Ceph's ability to distribute data across placement groups, and consequently doesn't improve performance very much. The following diagram depicts the simplest form of striping: If you anticipate large image sizes or large S3 or Swift objects, for example video, you may see considerable read/write performance improvements by striping client data over multiple objects within an object set. Significant write performance occurs when the client writes the stripe units to their corresponding objects in parallel. Since objects get mapped to different placement groups and further mapped to different OSDs, each write occurs in parallel at the maximum write speed. A write to a single disk would be limited by the head movement (for example, 6 ms per seek) and the bandwidth of that one device (for example, 100 MB/s). By spreading that write over multiple objects, which map to different placement groups and OSDs, Ceph can reduce the number of seeks per drive and combine the throughput of multiple drives to achieve much faster write or read speeds. Note Striping is independent of object replicas. Since CRUSH replicates objects across OSDs, stripes get replicated automatically. In the following diagram, client data gets striped across an object set ( object set 1 in the following diagram) consisting of 4 objects, where the first stripe unit is stripe unit 0 in object 0 , and the fourth stripe unit is stripe unit 3 in object 3 . After writing the fourth stripe, the client determines if the object set is full. If the object set is not full, the client begins writing a stripe to the first object again, see object 0 in the following diagram. If the object set is full, the client creates a new object set, see object set 2 in the following diagram, and begins writing to the first stripe, with a stripe unit of 16, in the first object in the new object set, see object 4 in the diagram below. Three important variables determine how Ceph stripes data: Object Size: Objects in the Ceph Storage Cluster have a maximum configurable size, such as 2 MB, or 4 MB. The object size should be large enough to accommodate many stripe units, and should be a multiple of the stripe unit. Important Red Hat recommends a safe maximum value of 16 MB. Stripe Width: Stripes have a configurable unit size, for example 64 KB. The Ceph Client divides the data it will write to objects into equally sized stripe units, except for the last stripe unit.
A stripe width should be a fraction of the Object Size so that an object may contain many stripe units. Stripe Count: The Ceph Client writes a sequence of stripe units over a series of objects determined by the stripe count. The series of objects is called an object set. After the Ceph Client writes to the last object in the object set, it returns to the first object in the object set. Important Test the performance of your striping configuration before putting your cluster into production. You CANNOT change these striping parameters after you stripe the data and write it to objects. Once the Ceph Client has striped data to stripe units and mapped the stripe units to objects, Ceph's CRUSH algorithm maps the objects to placement groups, and the placement groups to Ceph OSD Daemons before the objects are stored as files on a storage disk. Note Since a client writes to a single pool, all data striped into objects get mapped to placement groups in the same pool. So they use the same CRUSH map and the same access controls. 3.6. Ceph on-wire encryption You can enable encryption for all Ceph traffic over the network with the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. The second version of Ceph's on-wire protocol, msgr2 , includes several new features: A secure mode encrypting all data moving through the network. Encapsulation improvement of authentication payloads. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports allowing both the legacy, v1-compatible, and the new, v2-compatible, Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon will try to use the v2 protocol first, if possible, but if not, then the legacy v1 protocol will be used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The messenger v2 protocol has two configuration options that control whether the v1 or the v2 protocol is used: ms_bind_msgr1 - This option controls whether a daemon binds to a port speaking the v1 protocol; it is true by default. ms_bind_msgr2 - This option controls whether a daemon binds to a port speaking the v2 protocol; it is true by default. Similarly, two options control based on IPv4 and IPv6 addresses used: ms_bind_ipv4 - This option controls whether a daemon binds to an IPv4 address; it is true by default. ms_bind_ipv6 - This option controls whether a daemon binds to an IPv6 address; it is true by default. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . Ensure that you consider cluster CPU requirements when you plan the Red Hat Ceph Storage cluster, to include encryption overhead. Important Using secure mode is currently supported by Ceph kernel clients, such as CephFS and krbd on Red Hat Enterprise Linux. 
Using secure mode is supported by Ceph clients using librbd , such as OpenStack Nova, Glance, and Cinder. Address Changes For both versions of the messenger protocol to coexist in the same storage cluster, the address formatting has changed: Old address format was, IP_ADDR : PORT / CLIENT_ID , for example, 1.2.3.4:5678/91011 . New address format is, PROTOCOL_VERSION : IP_ADDR : PORT / CLIENT_ID , for example, v2:1.2.3.4:5678/91011 , where PROTOCOL_VERSION can be either v1 or v2 . Because the Ceph daemons now bind to multiple ports, the daemons display multiple addresses instead of a single address. Here is an example from a dump of the monitor map: Also, the mon_host configuration option and specifying addresses on the command line, using -m , supports the new address format. Connection Phases There are four phases for making an encrypted connection: Banner On connection, both the client and the server send a banner. Currently, the Ceph banner is ceph 0 0n . Authentication Exchange All data, sent or received, is contained in a frame for the duration of the connection. The server decides if authentication has completed, and what the connection mode will be. The frame format is fixed, and can be in three different forms depending on the authentication flags being used. Message Flow Handshake Exchange The peers identify each other and establish a session. The client sends the first message, and the server will reply with the same message. The server can close connections if the client talks to the wrong daemon. For new sessions, the client and server proceed to exchanging messages. Client cookies are used to identify a session, and can reconnect to an existing session. Message Exchange The client and server start exchanging messages, until the connection is closed. Additional Resources See the Red Hat Ceph Storage Data Security and Hardening Guide for details on enabling the msgr2 protocol.
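To require the secure connection mode described above instead of the default crc mode, the mode options can be set cluster-wide. This is a hedged sketch; confirm the option names and values for your release, and review the Data Security and Hardening Guide before changing them:
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global ms_client_mode secure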
|
[
"rbd create --size 102400 mypool/myimage --image-feature 5",
"rbd -p mypool create myimage --size 102400 --image-features 13",
"epoch 1 fsid 50fcf227-be32-4bcb-8b41-34ca8370bd17 last_changed 2021-12-12 11:10:46.700821 created 2021-12-12 11:10:46.700821 min_mon_release 14 (nautilus) 0: [v2:10.0.0.10:3300/0,v1:10.0.0.10:6789/0] mon.a 1: [v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0] mon.b 2: [v2:10.0.0.12:3300/0,v1:10.0.0.12:6789/0] mon.c"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/architecture_guide/the-ceph-client-components
|
8.6. The Replication Queue
|
8.6. The Replication Queue In replication mode, Red Hat JBoss Data Grid uses a replication queue to replicate changes across nodes based on the following: Previously set intervals. The queue size exceeding the number of elements. A combination of previously set intervals and the queue size exceeding the number of elements. The replication queue ensures that during replication, cache operations are transmitted in batches instead of individually. As a result, a lower number of replication messages are transmitted and fewer envelopes are used, resulting in improved JBoss Data Grid performance. A disadvantage of using the replication queue is that the queue is periodically flushed based on the time or the queue size. Such flushing operations delay the realization of replication, distribution, or invalidation operations across cluster nodes. When the replication queue is disabled, the data is directly transmitted and therefore the data arrives at the cluster nodes faster. A replication queue is used in conjunction with asynchronous mode. 8.6.1. Replication Queue Usage When using the replication queue, do one of the following: Disable asynchronous marshalling. Set the max-threads count value to 1 for the transport executor . The transport executor is defined in standalone.xml or clustered.xml as follows: To implement either of these solutions, the replication queue must be in use in asynchronous mode. Asynchronous mode can be set, along with the queue timeout ( queue-flush-interval , value is in milliseconds) and queue size ( queue-size ) as follows: Example 8.1. Replication Queue in Asynchronous Mode The replication queue allows requests to return to the client faster, therefore using the replication queue together with asynchronous marshalling does not present any significant advantages.
|
[
"<transport executor=\"infinispan-transport\"/>",
"<replicated-cache name=\"asyncCache\" start=\"EAGER\" mode=\"ASYNC\" batching=\"false\" indexing=\"NONE\" statistics=\"true\" queue-size=\"1000\" queue-flush-interval=\"500\"> <!-- Additional configuration information here --> </replicated-cache>"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-the_replication_queue
|
Part III. Installing and Managing Software
|
Part III. Installing and Managing Software All software on a Red Hat Enterprise Linux system is divided into RPM packages, which can be installed, upgraded, or removed. This part describes how to manage packages on Red Hat Enterprise Linux using Yum .
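For example, typical Yum operations covered in this part look like the following; the package name is only an illustration:
yum search httpd
yum install httpd
yum update httpd
yum remove httpd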
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/part-installing_and_managing_software
|
Appendix A. Using your subscription
|
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
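The command listed by the Registration Assistant typically resembles the following; this is a hedged example only, and you should run the exact command the assistant generates for your system:
sudo subscription-manager register --username <your_username>
sudo subscription-manager attach --auto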
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/using_your_subscription
|
Chapter 4. Creating a playbook project
|
Chapter 4. Creating a playbook project 4.1. Scaffolding a playbook project The following steps describe the process for scaffolding a new playbook project with the Ansible VS Code extension. Prerequisites You have installed Ansible development tools. You have installed and opened the Ansible VS Code extension. You have identified a directory where you want to save the project. Procedure Open VS Code. Click the Ansible icon in the VS Code activity bar to open the Ansible extension. Select Get started in the Ansible content creator section. The Ansible content creator tab opens. In the Create section, click Ansible playbook project . The Create Ansible project tab opens. In the form in the Create Ansible project tab, enter the following: Destination directory : Enter the path to the directory where you want to scaffold your new playbook project. Note If you enter an existing directory name, the scaffolding process overwrites the contents of that directory. The scaffold process only allows you to use an existing directory if you enable the Force option. If you are using the containerized version of Ansible Dev tools, the destination directory path is relative to the container, not a path in your local system. To discover the current directory name in the container, run the pwd command in a terminal in VS Code. If the current directory in the container is workspaces , enter workspaces/<destination_directory_name> . If you are using a locally installed version of Ansible Dev tools, enter the full path to the directory, for example /user/<username>/projects/<destination_directory_name> . SCM organization and SCM project : Enter a name for the directory and subdirectory where you can store roles that you create for your playbooks. Enter a name for the directory where you want to scaffold your new playbook project. Verification After the project directory has been created, the following message appears in the Logs pane of the Create Ansible Project tab. In this example, the destination directory name is destination_directory_name . ------------------ ansible-creator logs ------------------ Note: ansible project created at /Users/username/test_project The following directories and files are created in your project directory: USD tree -a -L 5 . ├── .devcontainer │ ├── devcontainer.json │ ├── docker │ │ └── devcontainer.json │ └── podman │ └── devcontainer.json ├── .gitignore ├── README.md ├── ansible-navigator.yml ├── ansible.cfg ├── collections │ ├── ansible_collections │ │ └── scm_organization_name │ │ └── scm_project_name │ └── requirements.yml ├── devfile.yaml ├── inventory │ ├── group_vars │ │ ├── all.yml │ │ └── web_servers.yml │ ├── host_vars │ │ ├── server1.yml │ │ ├── server2.yml │ │ ├── server3.yml │ │ ├── switch1.yml │ │ └── switch2.yml │ └── hosts.yml ├── linux_playbook.yml ├── network_playbook.yml └── site.yml
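After the project is scaffolded, you can try it out from a terminal in VS Code; the sketch below assumes the ansible-lint and ansible-navigator tools from the Ansible development tools bundle are available and that you run it from inside the new project directory.
# Change into the scaffolded project
cd <destination_directory_name>
# Check the generated playbooks against common best practices
ansible-lint site.yml
# Run the sample playbook using the generated inventory and ansible-navigator.yml settings
ansible-navigator run site.yml -i inventory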
|
[
"------------------ ansible-creator logs ------------------ Note: ansible project created at /Users/username/test_project",
"tree -a -L 5 . βββ .devcontainer β βββ devcontainer.json β βββ docker β β βββ devcontainer.json β βββ podman β βββ devcontainer.json βββ .gitignore βββ README.md βββ ansible-navigator.yml βββ ansible.cfg βββ collections β βββ ansible_collections β β βββ scm_organization_name β β βββ scm_project_name β βββ requirements.yml βββ devfile.yaml βββ inventory β βββ group_vars β β βββ all.yml β β βββ web_servers.yml β βββ host_vars β β βββ server1.yml β β βββ server2.yml β β βββ server3.yml β β βββ switch1.yml β β βββ switch2.yml β βββ hosts.yml βββ linux_playbook.yml βββ network_playbook.yml βββ site.yml"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/creating-playbook-project
|
Getting started
|
Getting started OpenShift Container Platform 4.10 Getting started in OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"/ws/data/load",
"Items inserted in database: 2893",
"oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify",
"oc login <https://api.your-openshift-server.com> --token=<tokenID>",
"oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"",
"Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".",
"oc adm policy add-role-to-user view -z default -n user-getting-started",
"oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'",
"--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success",
"oc get service",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s",
"oc create route edge parksmap --service=parksmap",
"route.route.openshift.io/parksmap created",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None",
"oc get pods",
"NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s",
"oc describe pods",
"Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/networks-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap",
"oc scale --current-replicas=1 --replicas=2 deployment/parksmap",
"deployment.apps/parksmap scaled",
"oc get pods",
"NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s",
"oc scale --current-replicas=2 --replicas=1 deployment/parksmap",
"oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true",
"--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi8\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success",
"oc create route edge nationalparks --service=nationalparks",
"route.route.openshift.io/parksmap created",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None",
"oc new-app quay.io/centos7/mongodb-36-centos7 --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'",
"--> Found container image dc18f52 (8 months old) from quay.io for \"quay.io/centos7/mongodb-36-centos7\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:latest\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success",
"oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb",
"secret/nationalparks-mongodb-parameters created",
"oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks",
"deployment.apps/nationalparks updated",
"oc rollout status deployment nationalparks",
"deployment \"nationalparks\" successfully rolled out",
"oc rollout status deployment mongodb-nationalparks",
"deployment \"nationalparks\" successfully rolled out deployment \"mongodb-nationalparks\" successfully rolled out",
"oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load",
"\"Items inserted in database: 2893\"",
"oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all",
", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]",
"oc label route nationalparks type=parksmap-backend",
"route.route.openshift.io/nationalparks labeled",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/getting_started/index
|
Chapter 4. Gaining Privileges
|
Chapter 4. Gaining Privileges System administrators (and in some cases users) will need to perform certain tasks with administrative access. Accessing the system as root is potentially dangerous and can lead to widespread damage to the system and data. This chapter covers ways to gain administrative privileges using the su and sudo programs. These programs allow specific users to perform tasks which would normally be available only to the root user while maintaining a higher level of control and system security. See the Red Hat Enterprise Linux 6 Security Guide for more information on administrative controls, potential dangers and ways to prevent data loss resulting from improper use of privileged access. 4.1. The su Command When a user executes the su command, they are prompted for the root password and, after authentication, are given a root shell prompt. Once logged in via the su command, the user is the root user and has absolute administrative access to the system [1] . In addition, once a user has become root, it is possible for them to use the su command to change to any other user on the system without being prompted for a password. Because this program is so powerful, administrators within an organization may want to limit who has access to the command. One of the simplest ways to do this is to add users to the special administrative group called wheel . To do this, type the following command as root: In the command, replace username with the user name you want to add to the wheel group. You can also use the User Manager to modify group memberships, as follows. Note: you need Administrator privileges to perform this procedure. Click the System menu on the Panel, point to Administration and then click Users and Groups to display the User Manager. Alternatively, type the command system-config-users at a shell prompt. Click the Users tab, and select the required user in the list of users. Click Properties on the toolbar to display the User Properties dialog box (or choose Properties on the File menu). Click the Groups tab, select the check box for the wheel group, and then click OK . See Section 3.2, "Managing Users via the User Manager Application" for more information about the User Manager . After you add the desired users to the wheel group, it is advisable to only allow these specific users to use the su command. To do this, you will need to edit the PAM configuration file for su : /etc/pam.d/su . Open this file in a text editor and remove the comment ( # ) from the following line: This change means that only members of the administrative group wheel can switch to another user using the su command. Note The root user is part of the wheel group by default. [1] This access is still subject to the restrictions imposed by SELinux, if it is enabled.
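A short terminal sketch of the workflow described above follows; username is a placeholder, and the id command is simply one way to confirm the group change.
# As root, add the user to the wheel group and verify the membership
~]# usermod -a -G wheel username
~]# id -nG username
username wheel
# With the pam_wheel line enabled, only members of wheel can now switch to root
su -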
|
[
"~]# usermod -a -G wheel username",
"#auth required pam_wheel.so use_uid"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/chap-deployment_guide-gaining_privileges
|
Product Guide
|
Product Guide Red Hat OpenStack Platform 17.0 Overview of Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/product_guide/index
|
Chapter 4. Adding Feature Packs to existing JBoss EAP Servers using the jboss-eap-installation-manager
|
Chapter 4. Adding Feature Packs to existing JBoss EAP Servers using the jboss-eap-installation-manager You can use jboss-eap-installation-manager to install and update JBoss EAP. You can also install and update feature packs on your existing JBoss EAP servers using jboss-eap-installation-manager . 4.1. Prerequisites You have an account on the Red Hat Customer Portal and are logged in. You have reviewed the supported configurations for JBoss EAP 8.0. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . 4.2. Adding Feature Packs to an existing JBoss EAP installation You can install feature packs on existing JBoss EAP servers using jboss-eap-installation-manager . Prerequisites You have installed JBoss EAP 8.0. Procedure Open your terminal emulator and navigate to the directory containing the downloaded jboss-eap-installation-manager . Define the channels that provide the feature packs you want your JBoss EAP server subscribed to: ./jboss-eap-installation-manager.sh channel add \ --channel-name feature-pack-channel \ --repositories https://fp.repo.org/maven \ --manifest com.example:feature-pack Create and subscribe your JBoss EAP installation to a channel providing additional artifacts. For more information see Creating and subscribing your JBoss EAP installation to a channel providing additional artifacts . Select layers and configuration of the installed Feature Pack The feature-pack add command is used to select which feature pack layer(s) need to be installed and which configuration files should be modified. Provide the necessary layers for the installation. USD ./jboss-eap-installation-manager.sh feature-pack add \ --fpl com.example:feature-pack \ --layers layer-one,layer-two \ --dir jboss-eap8 Provide the necessary layers for the installation and choose the configuration file you want to modify. USD ./jboss-eap-installation-manager.sh feature-pack add \ --fpl com.example:feature-pack \ --layers layer-one,layer-two \ --target-config standalone-ha.xml --dir jboss-eap8 Note You can modify the default feature pack configuration file by using the --target-config parameter. This parameter selects the configuration file in the standalone/configuration folder that will receive the changes. If you have changed the chosen configuration file before, the updates won't overwrite it. Instead, the changes will be saved in a new file named with a .glnew ending. You'll need to handle and merge any conflicts yourself. The feature pack can only modify standalone configuration files and not files in a managed domain. The supported values for the --target-config parameter are the names of the configuration files provided by the base EAP8/XP5 servers. Additional feature packs should not provide additional configurations. Additional resources Creating and subscribing your JBoss EAP installation to a channel providing additional artifacts . 4.3. Adding feature packs to existing JBoss EAP servers You can add additional feature packs when installing a JBoss EAP server with jboss-eap-installation-manager . Prerequisites You have an account on the Red Hat Customer Portal and are logged in. You have reviewed the supported configurations for JBoss EAP 8.0. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . Procedure Open the terminal emulator and navigate to the directory containing jboss-eap-installation-manager .
Create a provisioning.xml file and define the feature packs you want to install: <?xml version="1.0" ?> <installation xmlns="urn:jboss:galleon:provisioning:3.0"> <feature-pack location="org.jboss.eap:wildfly-ee-galleon-pack:zip"> <packages> <include name="docs.examples.configs"/> </packages> </feature-pack> <feature-pack location="<FEATURE_PACK_GROUP_ID>:<FEATURE_ARTIFACT_ID>:zip"> <default-configs inherit="false"/> <packages inherit="false"/> </feature-pack> <config model="standalone" name="standalone.xml"> <layers> <include name="<FEATURE_PACK_LAYER>"/> </layers> </config> </installation> Create a channels.yaml file, and define the channels you want JBoss EAP subscribed to. schemaVersion: "2.0.0" name: "eap-8.0" repositories: - id: "mrrc" url: "file:/Users/spyrkob/workspaces/set/prospero/prod-prospero/jboss-eap-8.0.0.GA-maven-repository/maven-repository" manifest: maven: groupId: "org.jboss.eap.channels" artifactId: "eap-8.0" --- schemaVersion: "2.0.0" name: "feature-pack-channel" repositories: - id: "feature-pack-repository" url: "https://repository.example.com/feature-pack" manifest: maven: groupId: "com.example.channels" artifactId: "feature-pack" Install feature packs using the --definition and --channel parameters: ./jboss-eap-installation-manager.sh install \ --definition provisioning.xml \ --channel channels.yaml \ --dir jboss-eap8 Installing galleon provisioning definition: provisioning.xml Using channels: # eap-8.0 manifest: org.jboss.eap.channels:eap-8.0 repositories: id: mrrc url: file:/tmp/jboss/jboss-eap-8.0.0.GA-maven-repository/maven-repository #feature-pack-channel manifest: com.example.channels:feature-pack repositories: id: feature-pack-repository url: https://repository.example.com/feature-pack =============== END USER LICENSE AGREEMENT RED HAT JBOSS(R) MIDDLEWARETM =============== [...] =============== Accept the agreement(s) [y/N]y Feature-packs resolved. Packages installed. Downloaded artifacts. JBoss modules installed. Configurations generated. JBoss examples installed. Server created in /tmp/jboss/jboss-eap8 Operation completed in 16.30 seconds 4.4. Adding Feature Packs to JBoss EAP servers by using an offline repository You can use the jboss-eap-installation-manager to add feature packs when installing a JBoss EAP from an offline repository. Prerequisites You have downloaded the JBoss EAP 8.0 offline repository from the Red Hat Customer Portal . If required, you have the downloaded feature pack repository archive file from the Red Hat Customer Portal . Note Downloading the feature pack offline repository is an optional prerequisite because some feature packs are already included in the JBoss EAP 8.0 offline repository. Procedure Open the terminal emulator and navigate to the directory containing jboss-eap-installation-manager . Install the required feature pack into JBoss EAP and specify the offline repositories by using --repositories parameter: USD ./jboss-eap-installation-manager.sh feature-pack add \ --fpl com.example:feature-pack \ --layers layer-one,layer-two \ --repositories file:/path/to/eap8/offline_repo,file:/path/to/feature_pack/offline_repo --dir jboss-eap8 4.5. Reverting installed feature packs You can use the jboss-eap-installation-manager to revert a feature pack previously added to your JBoss EAP server: Prerequisites You have added feature packs to your JBoss EAP server. Procedure Open the terminal emulator and navigate to the directory containing the downloaded jboss-eap-installation-manager . 
Investigate the history of all feature packs added to your JBoss EAP server: USD ./jboss-eap-installation-manager.sh history --dir jboss-eap-8.0 [79a553e7] 2023-08-23T13:39:10Z - feature_pack [org.jboss.eap.channels:eap-8.0::1.0.1.GA][com.example.channels:myfaces::1.0.0] [744013d2] 2023-08-23T13:38:16Z - config_change [928fe586] 2023-08-23T13:22:11Z - install [org.jboss.eap.channels:eap-8.0::1.0.1.GA] Stop the JBoss EAP server. Revert your server to a previous state: USD ./jboss-eap-installation-manager.sh revert perform --revision 744013d2 --dir jboss-eap-8.0 Reverting server /tmp/jboss/jboss-eap-8.0 to state 744013d2 Feature-packs resolved. Packages installed. Downloaded artifacts. JBoss modules installed. Configurations generated. JBoss examples installed. Reverted server prepared, comparing changes Changes found: org.jboss.eap:eap-myfaces-feature-pack 8.0.0.GA-redhat-20230816 ==> [] org.apache.myfaces.core:myfaces-api 4.0.0 ==> [] org.jboss.eap:eap-myfaces-injection 8.0.0.GA-redhat-20230816 ==> [] org.apache.myfaces.core:myfaces-impl 4.0.0 ==> [] Continue with revert [y/N]: y Applying changes Server reverted to state 977f97dd. Operation completed in 51.17 seconds. 4.6. Creating and subscribing to channels when installing JBoss EAP to provide additional artifacts Some feature packs require additional artifacts that are not supplied by Red Hat. You have to provide the required artifacts by defining custom channels. For example, the MyFaces feature pack requires the org.apache.myfaces.core:myfaces-impl and org.apache.myfaces.core:myfaces-api jar files. However, it is up to you to determine the precise versions of these jars. Note The following procedure describes how to create a channel that provides additional artifacts for the MyFaces feature pack. Prerequisite You have an account on the Red Hat Customer Portal and are logged in. You have reviewed the supported configurations for JBoss EAP 8.0. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . Your JBoss EAP installation has a feature pack installed on it. Procedure Open the terminal emulator and navigate to the directory containing jboss-eap-installation-manager . Create a manifest.yaml file: schemaVersion: 1.0.0 name: MyFaces manifest file streams: - groupId: org.apache.myfaces.core artifactId: myfaces-impl version: 4.0.0 - groupId: org.apache.myfaces.core artifactId: myfaces-api version: 4.0.0 Deploy the manifest to a local repository: mvn deploy:deploy-file -Dfile=manifest.yaml \ -DgroupId=com.example.channels -DartifactId=myfaces \ -Dclassifier=manifest -Dpackaging=yaml -Dversion=1.0.0 \ -Durl=file:/path/to/local/repository Subscribe your JBoss EAP server to the new channel: USD ./jboss-eap-installation-manager.sh channel add \ --channel-name myfaces-channel \ --repositories https://repo1.maven.org/maven2,file:/path/to/local/repository \ --manifest com.example.channels:myfaces \ --dir jboss-eap8
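After adding feature packs or channels, it can be helpful to review what the installation is subscribed to and whether newer artifacts are available. The sketch below assumes the channel list and update list subcommands, which are not covered in this chapter, behave analogously to the channel add and history commands shown above.
# List the channels the server installation is currently subscribed to
./jboss-eap-installation-manager.sh channel list --dir jboss-eap8
# Check the subscribed channels for available component updates
./jboss-eap-installation-manager.sh update list --dir jboss-eap8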
|
[
"./jboss-eap-installation-manager.sh channel add --channel-name feature-pack-channel --repositories https://fp.repo.org/maven --manifest com.example:feature-pack",
"./jboss-eap-installation-manager.sh feature-pack add --fpl com.example:feature-pack --layers layer-one,layer-two --dir jboss-eap8",
"./jboss-eap-installation-manager.sh feature-pack add --fpl com.example:feature-pack --layers layer-one,layer-two --target-config standalone-ha.xml --dir jboss-eap8",
"<?xml version=\"1.0\" ?> <installation xmlns=\"urn:jboss:galleon:provisioning:3.0\"> <feature-pack location=\"org.jboss.eap:wildfly-ee-galleon-pack:zip\"> <packages> <include name=\"docs.examples.configs\"/> </packages> </feature-pack> <feature-pack location=\"<FEATURE_PACK_GROUP_ID>:<FEATURE_ARTIFACT_ID>:zip\"> <default-configs inherit=\"false\"/> <packages inherit=\"false\"/> </feature-pack> <config model=\"standalone\" name=\"standalone.xml\"> <layers> <include name=\"<FEATURE_PACK_LAYER>\"/> </layers> </config> </installation>",
"schemaVersion: \"2.0.0\" name: \"eap-8.0\" repositories: - id: \"mrrc\" url: \"file:/Users/spyrkob/workspaces/set/prospero/prod-prospero/jboss-eap-8.0.0.GA-maven-repository/maven-repository\" manifest: maven: groupId: \"org.jboss.eap.channels\" artifactId: \"eap-8.0\" --- schemaVersion: \"2.0.0\" name: \"feature-pack-channel\" repositories: - id: \"feature-pack-repository\" url: \"https://repository.example.com/feature-pack\" manifest: maven: groupId: \"com.example.channels\" artifactId: \"feature-pack\"",
"./jboss-eap-installation-manager.sh install --definition provisioning.xml --channel channels.yaml --dir jboss-eap8 Installing galleon provisioning definition: provisioning.xml Using channels: eap-8.0 manifest: org.jboss.eap.channels:eap-8.0 repositories: id: mrrc url: file:/tmp/jboss/jboss-eap-8.0.0.GA-maven-repository/maven-repository #feature-pack-channel manifest: com.example.channels:feature-pack repositories: id: feature-pack-repository url: https://repository.example.com/feature-pack =============== END USER LICENSE AGREEMENT RED HAT JBOSS(R) MIDDLEWARETM =============== [...] =============== Accept the agreement(s) [y/N]y Feature-packs resolved. Packages installed. Downloaded artifacts. JBoss modules installed. Configurations generated. JBoss examples installed. Server created in /tmp/jboss/jboss-eap8 Operation completed in 16.30 seconds",
"./jboss-eap-installation-manager.sh feature-pack add --fpl com.example:feature-pack --layers layer-one,layer-two --repositories file:/path/to/eap8/offline_repo,file:/path/to/feature_pack/offline_repo --dir jboss-eap8",
"./jboss-eap-installation-manager.sh history --dir jboss-eap-8.0 [79a553e7] 2023-08-23T13:39:10Z - feature_pack [org.jboss.eap.channels:eap-8.0::1.0.1.GA][com.example.channels:myfaces::1.0.0] [744013d2] 2023-08-23T13:38:16Z - config_change [928fe586] 2023-08-23T13:22:11Z - install [org.jboss.eap.channels:eap-8.0::1.0.1.GA]",
"./jboss-eap-installation-manager.sh revert perform --revision 744013d2 --dir jboss-eap-8.0 Reverting server /tmp/jboss/jboss-eap-8.0 to state 744013d2 Feature-packs resolved. Packages installed. Downloaded artifacts. JBoss modules installed. Configurations generated. JBoss examples installed. Reverted server prepared, comparing changes Changes found: org.jboss.eap:eap-myfaces-feature-pack 8.0.0.GA-redhat-20230816 ==> [] org.apache.myfaces.core:myfaces-api 4.0.0 ==> [] org.jboss.eap:eap-myfaces-injection 8.0.0.GA-redhat-20230816 ==> [] org.apache.myfaces.core:myfaces-impl 4.0.0 ==> [] Continue with revert [y/N]: y Applying changes Server reverted to state 977f97dd. Operation completed in 51.17 seconds.",
"schemaVersion: 1.0.0 name: MyFaces manifest file streams: - groupId: org.apache.myfaces.core artifactId: myfaces-impl version: 4.0.0 - groupId: org.apache.myfaces.core artifactId: myfaces-api version: 4.0.0",
"mvn deploy:deploy-file -Dfile=manifest.yaml -DgroupId=com.example.channels -DartifactId=myfaces -Dclassifier=manifest -Dpackaging=yaml -Dversion=1.0.0 -Durl=file:/path/to/local/repository",
"./jboss-eap-installation-manager.sh channel add --channel-name myfaces-channel --repositories https://repo1.maven.org/maven2,file:/path/to/local/repository --manifest com.example.channels:myfaces --dir jboss-eap8"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/assembly_adding-feature-packs-in-your-jboss-eap-installation-using-the-jboss-eap-installation-manager_default
|
Chapter 10. Worker nodes for single-node OpenShift clusters
|
Chapter 10. Worker nodes for single-node OpenShift clusters 10.1. Adding worker nodes to single-node OpenShift clusters Single-node OpenShift clusters reduce the host prerequisites for deployment to a single host. This is useful for deployments in constrained environments or at the network edge. However, sometimes you need to add additional capacity to your cluster, for example, in telecommunications and network edge scenarios. In these scenarios, you can add worker nodes to the single-node cluster. Note Unlike multi-node clusters, by default all ingress traffic is routed to the single control-plane node, even after adding additional worker nodes. There are several ways that you can add worker nodes to a single-node cluster. You can add worker nodes to a cluster manually, using Red Hat OpenShift Cluster Manager , or by using the Assisted Installer REST API directly. Important Adding worker nodes does not expand the cluster control plane, and it does not provide high availability to your cluster. For single-node OpenShift clusters, high availability is handled by failing over to another site. When adding worker nodes to single-node OpenShift clusters, a tested maximum of two worker nodes is recommended. Exceeding the recommended number of worker nodes might result in lower overall performance, including cluster failure. Note To add worker nodes, you must have access to the OpenShift Cluster Manager. This method is not supported when using the Agent-based installer to install a cluster in a disconnected environment. 10.1.1. Requirements for installing single-node OpenShift worker nodes To install a single-node OpenShift worker node, you must address the following requirements: Administration host: You must have a computer to prepare the ISO and to monitor the installation. Production-grade server: Installing single-node OpenShift worker nodes requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 10.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 2 vCPU cores 8GB of RAM 100GB Note One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Networking: The worker node server must have access to the internet or access to a local registry if it is not connected to a routable network. The worker node server must have a DHCP reservation or a static IP address and be able to access the single-node OpenShift cluster Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN) for the single-node OpenShift cluster: Table 10.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster. 
Without persistent IP addresses, communications between the apiserver and etcd might fail. Additional resources Minimum resource requirements for cluster installation Recommended practices for scaling the cluster User-provisioned DNS requirements Creating a bootable ISO image on a USB drive Booting from an ISO image served over HTTP using the Redfish API Deleting nodes from a cluster 10.1.2. Adding worker nodes using the Assisted Installer and OpenShift Cluster Manager You can add worker nodes to single-node OpenShift clusters that were created on Red Hat OpenShift Cluster Manager using the Assisted Installer . Important Adding worker nodes to single-node OpenShift clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up. Prerequisites Have access to a single-node OpenShift cluster installed using Assisted Installer . Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Log in to OpenShift Cluster Manager and click the single-node cluster that you want to add a worker node to. Click Add hosts , and download the discovery ISO for the new worker node, adding an SSH public key and configuring cluster-wide proxy settings as required. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. After the host is discovered, start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation. When the worker node is successfully installed, it is listed as a worker node in the cluster web console. Important New worker nodes will be encrypted using the same method as the original cluster. Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.3. Adding worker nodes using the Assisted Installer API You can add worker nodes to single-node OpenShift clusters using the Assisted Installer REST API. Before you add worker nodes, you must log in to OpenShift Cluster Manager and authenticate against the API. 10.1.3.1. Authenticating against the Assisted Installer REST API Before you can use the Assisted Installer REST API, you must authenticate against the API using a JSON web token (JWT) that you generate. Prerequisites Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Procedure Log in to OpenShift Cluster Manager and copy your API token. Set the USDOFFLINE_TOKEN variable using the copied API token by running the following command: USD export OFFLINE_TOKEN=<copied_api_token> Set the USDJWT_TOKEN variable using the previously set USDOFFLINE_TOKEN variable: USD export JWT_TOKEN=USD( curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" ) Note The JWT token is valid for 15 minutes only.
Verification Optional: Check that you can access the API by running the following command: USD curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer USD{JWT_TOKEN}" | jq Example output { "release_tag": "v2.5.1", "versions": { "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175", "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223", "assisted-installer-service": "quay.io/app-sre/assisted-service:ac87f93", "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156" } } 10.1.3.2. Adding worker nodes using the Assisted Installer REST API You can add worker nodes to clusters using the Assisted Installer REST API. Prerequisites Install the OpenShift Cluster Manager CLI ( ocm ). Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Authenticate against the Assisted Installer REST API and generate a JSON web token (JWT) for your session. The generated JWT token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the single-node OpenShift cluster by running the following commands: Set the USDOPENSHIFT_CLUSTER_ID variable. Log in to the cluster and run the following command: USD export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDOPENSHIFT_CLUSTER_ID" '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": USDopenshift_cluster_id, "name": "<openshift_cluster_name>" 2 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, api.compute-1.example.com . 2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 
2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster worker node by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new worker node, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{JWT_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{JWT_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new worker node was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.30.3 compute-1.example.com Ready worker 11m v1.30.3 Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.4. Adding worker nodes to single-node OpenShift clusters manually You can add a worker node to a single-node OpenShift cluster manually by booting the worker node from Red Hat Enterprise Linux CoreOS (RHCOS) ISO and by using the cluster worker.ign file to join the new worker node to the cluster. Prerequisites Install a single-node OpenShift cluster on bare metal. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Set the OpenShift Container Platform version: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.17 Set the host architecture: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . 
Get the worker.ign data from the running single-node cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Host the worker.ign file on a web server accessible from your network. Download the OpenShift Container Platform installer and make it available for use by running the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL: USD ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Use the RHCOS ISO and the hosted worker.ign file to install the worker node: Boot the target host with the RHCOS ISO and your preferred method of installation. When the target host has booted from the RHCOS ISO, open a console on the target host. If your local network does not have DHCP enabled, you need to create an ignition file with the new hostname and configure the worker node static IP address before running the RHCOS installation. Perform the following steps: Configure the worker host network connection with a static IP. Run the following command on the target host console: USD nmcli con mod <network_interface> ipv4.method manual \ ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> \ 802-3-ethernet.mtu 9000 where: <static_ip> Is the host static IP address and CIDR, for example, 10.1.101.50/24 <network_gateway> Is the network gateway, for example, 10.1.101.1 Activate the modified network interface: USD nmcli con up <network_interface> Create a new ignition file new-worker.ign that includes a reference to the original worker.ign and an additional instruction that the coreos-installer program uses to populate the /etc/hostname file on the new worker host. For example: { "ignition":{ "version":"3.2.0", "config":{ "merge":[ { "source":"<hosted_worker_ign_file>" 1 } ] } }, "storage":{ "files":[ { "path":"/etc/hostname", "contents":{ "source":"data:,<new_fqdn>" 2 }, "mode":420, "overwrite":true, "path":"/etc/hostname" } ] } } 1 <hosted_worker_ign_file> is the locally accessible URL for the original worker.ign file. For example, http://webserver.example.com/worker.ign 2 <new_fqdn> is the new FQDN that you set for the worker node. For example, new-worker.example.com . Host the new-worker.ign file on a web server accessible from your network. Run the following coreos-installer command, passing in the ignition-url and hard disk details: USD sudo coreos-installer install --copy-network \ --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition where: <new_worker_ign_file> is the locally accessible URL for the hosted new-worker.ign file, for example, http://webserver.example.com/new-worker.ign <hard_disk> Is the hard disk where you install RHCOS, for example, /dev/sda For networks that have DHCP enabled, you do not need to set a static IP.
Run the following coreos-installer command from the target host console to install the system: USD coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk> To manually enable DHCP, apply the following NMStateConfig CR to the single-node OpenShift cluster: apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "eth0" macAddress: "AA:BB:CC:DD:EE:11" Important The NMStateConfig CR is required for successful deployments of worker nodes with static IP addresses and for adding a worker node with a dynamic IP address if the single-node OpenShift was deployed with a static IP address. The cluster network DHCP does not automatically set these network settings for the new worker node. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation. When the install is complete, reboot the host. The host joins the cluster as a new worker node. Verification Check that the new worker node was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.30.3 compute-1.example.com Ready worker 11m v1.30.3 Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.5. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. 
After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests
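To tie the approval steps together, the following sketch shows one way to watch for the pending CSRs while the new worker joins and then confirm the result; the worker-role label selector in the last command is an assumption and does not appear elsewhere in this chapter.
# Watch certificate signing requests as the worker node boots (Ctrl+C to stop)
oc get csr --watch
# Approve any CSRs that are still pending (client CSRs first, then serving CSRs)
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
# Confirm that the new host is listed with the worker role and a Ready status
oc get nodes -l node-role.kubernetes.io/worker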
|
[
"export OFFLINE_TOKEN=<copied_api_token>",
"export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )",
"curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq",
"{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }",
"export API_URL=<api_url> 1",
"export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.30.3 compute-1.example.com Ready worker 11m v1.30.3",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000",
"nmcli con up <network_interface>",
"{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }",
"sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition",
"coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>",
"apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.30.3 compute-1.example.com Ready worker 11m v1.30.3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/nodes/worker-nodes-for-single-node-openshift-clusters
|
Chapter 2. Connecting to Kafka with Kamelets
|
Chapter 2. Connecting to Kafka with Kamelets Apache Kafka is an open-source, distributed, publish-subscribe messaging system for creating fault-tolerant, real-time data feeds. Kafka quickly stores and replicates data for a large number of consumers (external connections). Kafka can help you build solutions that process streaming events. A distributed, event-driven architecture requires a "backbone" that captures, communicates and helps process events. Kafka can serve as the communication backbone that connects your data sources and events to applications. You can use Kamelets to configure communication between Kafka and external resources. Kamelets allow you to configure how data moves from one endpoint to another in a Kafka stream-processing framework without writing code. Kamelets are route templates that you configure by specifying parameter values. For example, Kafka stores data in a binary form. You can use Kamelets to serialize and deserialize the data for sending to, and receiving from, external connections. With Kamelets, you can validate the schema and make changes to the data, such as adding to it, filtering it, or masking it. Kamelets can also handle and process errors. 2.1. Overview of connecting to Kafka with Kamelets If you use an Apache Kafka stream-processing framework, you can use Kamelets to connect services and applications to a Kafka topic. The Kamelet Catalog provides the following Kamelets specifically for making connections to a Kafka topic: kafka-sink - Moves events from a data producer to a Kafka topic. In a Kamelet Binding, specify the kafka-sink Kamelet as the sink. kafka-source - Moves events from a Kafka topic to a data consumer. In a Kamelet Binding, specify the kafka-source Kamelet as the source. Figure 2.1 illustrates the flow of connecting source and sink Kamelets to a Kafka topic. Figure 2.1: Data flow with Kamelets and a Kafka topic Here is an overview of the basic steps for using Kamelets and Kamelet Bindings to connect applications and services to a Kafka topic: Set up Kafka: Install the needed OpenShift operators. For OpenShift Streams for Apache Kafka, install the Camel K operator, the Camel K CLI, and the Red Hat OpenShift Application Services (RHOAS) CLI. For AMQ streams, install the Camel K and AMQ streams operators and the Camel K CLI. Create a Kafka instance. A Kafka instance operates as a message broker. A broker contains topics and orchestrates the storage and passing of messages. Create a Kafka topic. A topic provides a destination for the storage of data. Obtain Kafka authentication credentials. Determine which services or applications you want to connect to your Kafka topic. View the Kamelet Catalog to find the Kamelets for the source and sink components that you want to add to your integration. Also, determine the required configuration parameters for each Kamelet that you want to use. Create Kamelet Bindings: Create a Kamelet Binding that connects a data source (a component that produces data) to the Kafka topic (by using the kafka-sink Kamelet). Create a Kamelet Binding that connects the kafka topic (by using kafka-source Kamelet) to a data sink (a component that consumes data). Optionally, manipulate the data that passes between the Kafka topic and the data source or sink by adding one or more action Kamelets as intermediary steps within a Kamelet Binding. Optionally, define how to handle errors within a Kamelet Binding. Apply the Kamelet Bindings as resources to the project. 
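As a preview of the pattern used throughout the rest of this chapter, applying a finished Kamelet Binding and checking the integration it produces looks roughly like this (a sketch; coffees-to-kafka.yaml is the example binding file created later in this chapter):
oc apply -f coffees-to-kafka.yaml
oc get kameletbindings
oc get integrations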
The Camel K operator generates a separate Camel K integration for each Kamelet Binding. 2.2. Setting up Kafka To set up Kafka, you must: Install the required OpenShift operators Create a Kafka instance Create a Kafka topic Use the Red Hat product mentioned below to set up Kafka: Red Hat Advanced Message Queuing (AMQ) streams - A self-managed Apache Kafka offering. AMQ Streams is based on open source Strimzi and is included as part of Red Hat Integration . AMQ Streams is a distributed and scalable streaming platform based on Apache Kafka that includes a publish/subscribe messaging broker. Kafka Connect provides a framework to integrate Kafka-based systems with external systems. Using Kafka Connect, you can configure source and sink connectors to stream data from external systems into and out of a Kafka broker. 2.2.1. Setting up Kafka by using AMQ streams AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. 2.2.1.1. Preparing your OpenShift cluster for AMQ Streams To use Camel K or Kamelets and Red Hat AMQ Streams, you must install the following operators and tools: Red Hat Integration - AMQ Streams operator - Manages the communication between your Openshift Cluster and AMQ Streams for Apache Kafka instances. Red Hat Integration - Camel K operator - Installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. Camel K CLI tool - Allows you to access all Camel K features. Prerequisites You are familiar with Apache Kafka concepts. You can access an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift and the Camel K CLI on your local system. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. Procedure To set up Kafka by using AMQ Streams: Log in to your OpenShift cluster's web console. Create or open a project in which you plan to create your integration, for example my-camel-k-kafka . Install the Camel K operator and Camel K CLI as described in Installing Camel K . Install the AMQ streams operator: From any project, select Operators > OperatorHub . In the Filter by Keyword field, type AMQ Streams . Click the Red Hat Integration - AMQ Streams card and then click Install . The Install Operator page opens. Accept the defaults and then click Install . Select Operators > Installed Operators to verify that the Camel K and AMQ Streams operators are installed. steps Setting up a Kafka topic with AMQ Streams 2.2.1.2. Setting up a Kafka topic with AMQ Streams A Kafka topic provides a destination for the storage of data in a Kafka instance. You must set up a Kafka topic before you can send data to it. Prerequisites You can access an OpenShift cluster. You installed the Red Hat Integration - Camel K and Red Hat Integration - AMQ Streams operators as described in Preparing your OpenShift cluster . You installed the OpenShift CLI ( oc ) and the Camel K CLI ( kamel ). Procedure To set up a Kafka topic by using AMQ Streams: Log in to your OpenShift cluster's web console. Select Projects and then click the project in which you installed the Red Hat Integration - AMQ Streams operator. For example, click the my-camel-k-kafka project. Select Operators > Installed Operators and then click Red Hat Integration - AMQ Streams . Create a Kafka cluster: Under Kafka , click Create instance . Type a name for the cluster, for example kafka-test . 
Accept the other defaults and then click Create . The process to create the Kafka instance might take a few minutes to complete. When the status is ready, continue to the step. Create a Kafka topic: Select Operators > Installed Operators and then click Red Hat Integration - AMQ Streams . Under Kafka Topic , click Create Kafka Topic . Type a name for the topic, for example test-topic . Accept the other defaults and then click Create . 2.2.2. Setting up Kafka by using OpenShift streams To use OpenShift Streams for Apache Kafka, you must be logged into your Red Hat account. 2.2.2.1. Preparing your OpenShift cluster for OpenShift Streams To use managed cloud service, you must install the following operators and tools: OpenShift Application Services (RHOAS) CLI - Allows you to manage your application services from a terminal. Red Hat Integration - Camel K operator Installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. Camel K CLI tool - Allows you to access all Camel K features. Prerequisites You are familiar with Apache Kafka concepts. You can access an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift and Apache Camel K CLI on your local system. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. Procedure Log in to your OpenShift web console with a cluster admin account. Create the OpenShift project for your Camel K or Kamelets application. Select Home > Projects . Click Create Project . Type the name of the project, for example my-camel-k-kafka , then click Create . Download and install the RHOAS CLI as described in Getting started with the rhoas CLI . Install the Camel K operator and Camel K CLI as described in Installing Camel K . To verify that the Red Hat Integration - Camel K operator is installed, click Operators > Installed Operators . step Setting up a Kafka topic with RHOAS 2.2.2.2. Setting up a Kafka topic with RHOAS Kafka organizes messages around topics . Each topic has a name. Applications send messages to topics and retrieve messages from topics. A Kafka topic provides a destination for the storage of data in a Kafka instance. You must set up a Kafka topic before you can send data to it. Prerequisites You can access an OpenShift cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift and the Camel K CLI on your local system. You installed the OpenShift CLI ( oc ) , the Camel K CLI ( kamel ) , and RHOAS CLI ( rhoas ) tools as described in Preparing your OpenShift cluster . You installed the Red Hat Integration - Camel K operator as described in Preparing your OpenShift cluster . You are logged in to the Red Hat Cloud site . Procedure To set up a Kafka topic: From the command line, log in to your OpenShift cluster. Open your project, for example: oc project my-camel-k-kafka Verify that the Camel K operator is installed in your project: oc get csv The result lists the Red Hat Camel K operator and indicates that it is in the Succeeded phase. Prepare and connect a Kafka instance to RHOAS: Login to the RHOAS CLI by using this command: rhoas login Create a kafka instance, for example kafka-test : rhoas kafka create kafka-test The process to create the Kafka instance might take a few minutes to complete. 
To check the status of your Kafka instance: rhoas status You can also view the status in the web console: https://cloud.redhat.com/application-services/streams/kafkas/ When the status is ready , continue to the step. Create a new Kafka topic: rhoas kafka topic create --name test-topic Connect your Kafka instance (cluster) with the Openshift Application Services instance: rhoas cluster connect Follow the script instructions for obtaining a credential token. You should see output similar to the following: step Obtaining Kafka credentials 2.2.2.3. Obtaining Kafka credentials To connect your applications or services to a Kafka instance, you must first obtain the following Kafka credentials: Obtain the bootstrap URL. Create a service account with credentials (username and password). For OpenShift Streams, the authentication protocol is SASL_SSL. Prerequisite You have created a Kafka instance, and it has a ready status. You have created a Kafka topic. Procedure Obtain the Kafka Broker URL (Bootstrap URL): rhoas status This command returns output similar to the following: To obtain a username and password, create a service account by using the following syntax: rhoas service-account create --name "<account-name>" --file-format json Note When creating a service account, you can choose the file format and location to save the credentials. For more information, type rhoas service-account create --help For example: rhoas service-account create --name "my-service-acct" --file-format json The service account is created and saved to a JSON file. To verify your service account credentials, view the credentials.json file: cat credentials.json This command returns output similar to the following: Grant permission for sending and receiving messages to or from the Kakfa topic. Use the following command, where clientID is the value provided in the credentials.json file (from Step 3). For example: 2.3. Connecting a data source to a Kafka topic in a Kamelet Binding To connect a data source to a Kafka topic, you create a Kamelet Binding as illustrated in Figure 2.2 . Figure 2.2 Connecting a data source to a Kafka topic Prerequisites You know the name of the Kafka topic to which you want to send events. The example in this procedure uses test-topic for receiving events. You know the values of the following parameters for your Kafka instance: bootstrapServers - A comma separated list of Kafka Broker URLs. password - The password to authenticate to Kafka. For OpenShift Streams, this is the password in the credentials.json file. For an unauthenticated kafka instance on AMQ Streams, you can specify any non-empty string. user - The user name to authenticate to Kafka. For OpenShift Streams, this is the clientID in the credentials.json file. For an unauthenticated kafka instance on AMQ Streams, you can specify any non-empty string. For information on how to obtain these values when you use OpenShift Streams, see Obtaining Kafka credentials . securityProtocol - You know the security protocol for communicating with the Kafka brokers. For a Kafka cluster on OpenShift Streams, it is SASL_SSL (the default). For a Kafka cluster on AMQ streams, it is PLAINTEXT . You know which Kamelets you want to add to your Camel K integration and the required instance parameters. The example Kamelets for this procedure are: The coffee-source Kamelet - It has an optional parameter, period , that specifies how often to send each event. 
You can copy the code from Example source Kamelet to a file named coffee-source.kamelet.yaml file and then run the following command to add it as a resource to your namespace: oc apply -f coffee-source.kamelet.yaml The kafka-sink Kamelet provided in the Kamelet Catalog. You use the kafka-sink Kamelet because the Kafka topic is receiving data (it is the data consumer) in this binding. Procedure To connect a data source to a Kafka topic, create a Kamelet Binding: In an editor of your choice, create a YAML file with the following basic structure: Add a name for the Kamelet Binding. For this example, the name is coffees-to-kafka because the binding connects the coffee-source Kamelet to the kafka-sink Kamelet. For the Kamelet Binding's source, specify a data source Kamelet (for example, the coffee-source Kamelet produces events that contain data about coffee) and configure any parameters for the Kamelet. For the Kamelet Binding's sink, specify the kafka-sink Kamelet and its required properties. For example, when the Kafka cluster is on OpenShift Streams: For the user property, specify the clientID , for example: srvc-acct-eb575691-b94a-41f1-ab97-50ade0cd1094 For the password property, specify the password , for example: facf3df1-3c8d-4253-aa87-8c95ca5e1225 You do not need to set the securityProtocol property. For another example, when the Kafka cluster is on AMQ Streams, set the securityProtocol property to "PLAINTEXT" : Save the YAML file (for example, coffees-to-kafka.yaml ). Log into your OpenShift project. Add the Kamelet Binding as a resource to your OpenShift namespace: oc apply -f <kamelet binding filename> For example: oc apply -f coffees-to-kafka.yaml The Camel K operator generates and runs a Camel K integration by using the KameletBinding resource. It might take a few minutes to build. To see the status of the KameletBinding resource: oc get kameletbindings To see the status of their integrations: oc get integrations To view the integration's log: kamel logs <integration> -n <project> For example: kamel logs coffees-to-kafka -n my-camel-k-kafka See Also Applying operations to data within a Kafka connection Handling errors within a connection Connecting a Kafka topic to a data sink in a Kamelet Binding 2.4. Connecting a Kafka topic to a data sink in a Kamelet Binding To connect a Kafka topic to a data sink, you create a Kamelet Binding as illustrated in Figure 2.3 . Figure 2.3 Connecting a Kafka topic to a data sink Prerequisites You know the name of the Kafka topic from which you want to send events. The example in this procedure uses test-topic for sending events. It is the same topic that you used to receive events from the coffee source in Connecting a data source to a Kafka topic in a Kamelet Binding . You know the values of the following parameters for your Kafka instance: bootstrapServers - A comma separated list of Kafka Broker URLs. password - The password to authenticate to Kafka. user - The user name to authenticate to Kafka. For information on how to obtain these values when you use OpenShift Streams, see Obtaining Kafka credentials . You know the security protocol for communicating with the Kafka brokers. For a Kafka cluster on OpenShift Streams, it is SASL_SSL (the default). For a Kafka cluster on AMQ streams, it is PLAINTEXT . You know which Kamelets you want to add to your Camel K integration and the required instance parameters. 
The example Kamelets for this procedure are provided in the Kamelet Catalog: The kafka-source Kamelet - Use the kafka-source Kamelet because the Kafka topic is sending data (it is the data producer) in this binding. The example values for the required parameters are: bootstrapServers - "broker.url:9092" password - "testpassword" user - "testuser" topic - "test-topic" securityProtocol - For a Kafka cluster on OpenShift Streams, you do not need to set this parameter because SASL_SSL is the default value. For a Kafka cluster on AMQ streams, this parameter value is "PLAINTEXT" . The log-sink Kamelet - Use the log-sink to log the data that it receives from the kafka-source Kamelet. Optionally, specify the showStreams parameter to show the message body of the data. The log-sink Kamelet is useful for debugging purposes. Procedure To connect a Kafka topic to a data sink, create a Kamelet Binding: In an editor of your choice, create a YAML file with the following basic structure: Add a name for the Kamelet Binding. For this example, the name is kafka-to-log because the binding connects the kafka-source Kamelet to the log-sink Kamelet. For the Kamelet Binding's source, specify the kafka-source Kamelet and configure its parameters. For example, when the Kafka cluster is on OpenShift Streams (you do not need to set the securityProtocol parameter): For example, when the Kafka cluster is on AMQ Streams you must set the securityProtocol parameter to "PLAINTEXT" : For the Kamelet Binding's sink, specify the data consumer Kamelet (for example, the log-sink Kamelet) and configure any parameters for the Kamelet, for example: Save the YAML file (for example, kafka-to-log.yaml ). Log into your OpenShift project. Add the Kamelet Binding as a resource to your OpenShift namespace: oc apply -f <kamelet binding filename> For example: oc apply -f kafka-to-log.yaml The Camel K operator generates and runs a Camel K integration by using the KameletBinding resource. It might take a few minutes to build. To see the status of the KameletBinding resource: oc get kameletbindings To see the status of their integrations: oc get integrations To view the integration's log: kamel logs <integration> -n <project> For example: kamel logs kafka-to-log -n my-camel-k-kafka In the output, you should see coffee events, for example: To stop a running integration, delete the associated Kamelet Binding resource: oc delete kameletbindings/<kameletbinding-name> For example: oc delete kameletbindings/kafka-to-log See also Applying operations to data within a Kafka connection Adding an error handler policy to a Kamelet Binding 2.5. Applying operations to data within a Kafka connection If you want to perform an operation on the data that passes between a Kamelet and a Kafka topic, use action Kamelets as intermediary steps within a Kamelet Binding. Applying operations to data within a connection Routing event data to different destination topics Filtering event data for a specific Kafka topic 2.5.1. Routing event data to different destination topics When you configure a connection to a Kafka instance, you can optionally transform the topic information from the event data so that the event is routed to a different Kafka topic. Use one of the following transformation action Kamelets: Regex Router - Modify the topic of a message by using a regular expression and a replacement string. For example, if you want to remove a topic prefix, add a prefix, or remove part of a topic name. Configure the Regex Router Action Kamelet ( regex-router-action ). 
TimeStamp - Modify the topic of a message based on the original topic and the message's timestamp. For example, when using a sink that needs to write to different tables or indexes based on timestamps, such as when you want to write events from Kafka to Elasticsearch, but each event needs to go to a different index based on information in the event itself. Configure the Timestamp Router Action Kamelet ( timestamp-router-action ). Message TimeStamp - Modify the topic of a message based on the original topic value and the timestamp field coming from a message value field. Configure the Message Timestamp Router Action Kamelet ( message-timestamp-router-action ). Predicate - Filter events based on the given JSON path expression by configuring the Predicate Filter Action Kamelet ( predicate-filter-action ). Prerequisites You have created a Kamelet Binding in which the sink is a kafka-sink Kamelet, as described in Connecting a data source to a Kafka topic in a Kamelet Binding . You know which type of transformation you want to add to the Kamelet Binding. Procedure To transform the destination topic, use one of the transformation action Kamelets as an intermediary step within the Kamelet Binding. For details on how to add an action Kamelet to a Kamelet Binding, see Adding an operation to a Kamelet Binding . 2.5.2. Filtering event data for a specific Kafka topic If you use a source Kamelet that produces records to many different Kafka topics and you want to keep only the records for one Kafka topic, add the topic-name-matches-filter-action Kamelet as an intermediary step in the Kamelet Binding. Prerequisites You have created a Kamelet Binding in a YAML file. You know the name of the Kafka topic that you want to filter for. Procedure Edit the Kamelet Binding to include the topic-name-matches-filter-action Kamelet as an intermediary step between the source and sink Kamelets. Typically, you use the kafka-source Kamelet as the source Kamelet, and you supply a topic as the value of the required topic parameter. In the following Kamelet Binding example, the kafka-source Kamelet specifies the test-topic, test-topic-2, and test-topic-3 Kafka topics and the topic-name-matches-filter-action Kamelet passes along only the event data from the test-topic topic: If you want to filter topics coming from a source Kamelet other than the kafka-source Kamelet, you must supply the Kafka topic information. You can use the insert-header-action Kamelet to add a Kafka topic field as an intermediary step, before the topic-name-matches-filter-action step in the Kamelet Binding as shown in the following example: Save the Kamelet Binding YAML file.
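After saving the file, you can apply it and watch the resulting integration in the same way as the earlier bindings in this chapter. The following is a sketch that assumes the file and binding are named kafka-to-log-by-topic and that the project is my-camel-k-kafka:
oc apply -f kafka-to-log-by-topic.yaml
oc get kameletbindings
kamel logs kafka-to-log-by-topic -n my-camel-k-kafka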
|
[
"Token Secret \"rh-cloud-services-accesstoken-cli\" created successfully Service Account Secret \"rh-cloud-services-service-account\" created successfully KafkaConnection resource \"kafka-test\" has been created KafkaConnection successfully installed on your cluster.",
"Kafka --------------------------------------------------------------- ID: 1ptdfZRHmLKwqW6A3YKM2MawgDh Name: my-kafka Status: ready Bootstrap URL: my-kafka--ptdfzrhmlkwqw-a-ykm-mawgdh.kafka.devshift.org:443",
"{\"clientID\":\"srvc-acct-eb575691-b94a-41f1-ab97-50ade0cd1094\", \"password\":\"facf3df1-3c8d-4253-aa87-8c95ca5e1225\"}",
"rhoas kafka acl grant-access --producer --consumer --service-account USDCLIENT_ID --topic test-topic --group all",
"rhoas kafka acl grant-access --producer --consumer --service-account srvc-acct-eb575691-b94a-41f1-ab97-50ade0cd1094 --topic test-topic --group all",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-kafka spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-kafka spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-kafka spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"my-kafka--ptdfzrhmlkwqw-a-ykm-mawgdh.kafka.devshift.org:443\" password: \"facf3df1-3c8d-4253-aa87-8c95ca5e1225\" topic: \"test-topic\" user: \"srvc-acct-eb575691-b94a-41f1-ab97-50ade0cd1094\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-kafka spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"broker.url:9092\" password: \"testpassword\" topic: \"test-topic\" user: \"testuser\" securityProtocol: \"PLAINTEXT\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-to-log spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-to-log spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: \"broker.url:9092\" password: \"testpassword\" topic: \"test-topic\" user: \"testuser\" sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-to-log spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: \"broker.url:9092\" password: \"testpassword\" topic: \"test-topic\" user: \"testuser\" securityProtocol: \"PLAINTEXT\" sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-to-log spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: \"broker.url:9092\" password: \"testpassword\" topic: \"test-topic\" user: \"testuser\" securityProtocol: \"PLAINTEXT\" // only for AMQ streams sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: log-sink properties: showStreams: true",
"INFO [log-sink-E80C5C904418150-0000000000000001] (Camel (camel-1) thread #0 - timer://tick) {\"id\":7259,\"uid\":\"a4ecb7c2-05b8-4a49-b0d2-d1e8db5bc5e2\",\"blend_name\":\"Postmodern Symphony\",\"origin\":\"Huila, Colombia\",\"variety\":\"Kona\",\"notes\":\"delicate, chewy, black currant, red apple, star fruit\",\"intensifier\":\"balanced\"}",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-to-log-by-topic spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-source properties: bootstrapServers: \"broker.url:9092\" password: \"testpassword\" topic: \"test-topic, test-topic-2, test-topic-3\" user: \"testuser\" securityProtocol: \"PLAINTEXT\" // only for AMQ streams steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: topic-name-matches-filter-action properties: regex: \"test-topic\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: log-sink properties: showStreams: true",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffee-to-log-by-topic spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"KAFKA.topic\" value: \"test-topic\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: topic-name-matches-filter-action properties: regex: \"test-topic\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: log-sink properties: showStreams: true"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/integrating_applications_with_kamelets/connecting-to-kafka-kamelets
|
Chapter 9. jdbc
|
Chapter 9. jdbc 9.1. jdbc:ds-create 9.1.1. Description Create a JDBC datasource config for pax-jdbc-config from a DataSourceFactory 9.1.2. Syntax jdbc:ds-create [options] name 9.1.3. Arguments Name Description name The JDBC datasource name 9.1.4. Options Name Description -p, --password The database password --help Display this help message -dt, --databaseType The database type (ConnectionPoolDataSource, XADataSource or DataSource) -dbName Database name to use -dn, --driverName org.osgi.driver.name property of the DataSourceFactory -url The JDBC URL to use -dc, --driverClass org.osgi.driver.class property of the DataSourceFactory -u, --username The database username 9.2. jdbc:ds-delete 9.2.1. Description Delete a JDBC datasource 9.2.2. Syntax jdbc:ds-delete [options] name 9.2.3. Arguments Name Description name The JDBC datasource name (the one used at creation time) 9.2.4. Options Name Description --help Display this help message 9.3. jdbc:ds-factories 9.3.1. Description List the JDBC DataSourceFactories 9.3.2. Syntax jdbc:ds-factories [options] 9.3.3. Options Name Description --help Display this help message 9.4. jdbc:ds-info 9.4.1. Description Display details about a JDBC datasource 9.4.2. Syntax jdbc:ds-info [options] datasource 9.4.3. Arguments Name Description datasource The JDBC datasource name 9.4.4. Options Name Description --help Display this help message 9.5. jdbc:ds-list 9.5.1. Description List the JDBC datasources 9.5.2. Syntax jdbc:ds-list [options] 9.5.3. Options Name Description --help Display this help message 9.6. jdbc:execute 9.6.1. Description Execute a SQL command on a given JDBC datasource 9.6.2. Syntax jdbc:execute [options] datasource command 9.6.3. Arguments Name Description datasource The JDBC datasource command The SQL command to execute 9.6.4. Options Name Description --help Display this help message 9.7. jdbc:query 9.7.1. Description Execute a SQL query on a JDBC datasource 9.7.2. Syntax jdbc:query [options] datasource query 9.7.3. Arguments Name Description datasource The JDBC datasource to use query The SQL query to execute 9.7.4. Options Name Description --help Display this help message 9.8. jdbc:tables 9.8.1. Description List the tables on a given JDBC datasource 9.8.2. Syntax jdbc:tables [options] datasource 9.8.3. Arguments Name Description datasource The JDBC datasource to use 9.8.4. Options Name Description --help Display this help message
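To illustrate how these commands are typically combined, the following console session creates, queries, and removes a datasource. It is a hypothetical example that assumes an H2 driver and its DataSourceFactory are available in the container; substitute the driver class, JDBC URL, and credentials for your own database:
jdbc:ds-create -dc org.h2.Driver -url jdbc:h2:mem:test -u sa testds
jdbc:ds-list
jdbc:execute testds "CREATE TABLE person(name VARCHAR(100))"
jdbc:execute testds "INSERT INTO person(name) VALUES('example')"
jdbc:query testds "SELECT * FROM person"
jdbc:tables testds
jdbc:ds-delete testds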
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/jdbc
|
19.5. Adding a Remote Connection
|
19.5. Adding a Remote Connection This procedure covers how to set up a connection to a remote system using virt-manager . To create a new connection, open the File menu and select the Add Connection menu item. The Add Connection wizard appears. Select the hypervisor. For Red Hat Enterprise Linux 7 systems, select QEMU/KVM . Select Local for the local system or one of the remote connection options and click Connect . This example uses Remote tunnel over SSH, which works on default installations. For more information on configuring remote connections, see Chapter 18, Remote Management of Guests Figure 19.10. Add Connection Enter the root password for the selected host when prompted. A remote host is now connected and appears in the main virt-manager window. Figure 19.11. Remote host in the main virt-manager window
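The equivalent SSH-tunnelled connection can also be opened from the command line, which is sometimes useful for scripting or troubleshooting. The following commands are shown for illustration only; host.example.com is a placeholder for the remote host:
virt-manager --connect qemu+ssh://root@host.example.com/system
virsh -c qemu+ssh://root@host.example.com/system list --all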
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guests_with_the_virtual_machine_manager_virt_manager-adding_a_remote_connection
|
Chapter 28. Using Metering on AMQ Streams
|
Chapter 28. Using Metering on AMQ Streams You can use the Metering tool that is available on OpenShift to generate metering reports from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available. Using Prometheus as a default data source, you can generate reports on pods, namespaces, and most other OpenShift resources. You can also use the OpenShift Metering operator to analyze your installed AMQ Streams components to determine whether you are in compliance with your Red Hat subscription. To use metering with AMQ Streams, you must first install and configure the Metering operator on OpenShift Container Platform. 28.1. Metering resources Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. Metering is managed using the following CRDs: Table 28.1. Metering resources Name Description MeteringConfig Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. Reports Controls what query to use, when, and how often the query should be run, and where to store the results. ReportQueries Contains the SQL queries used to perform analysis on the data contained within ReportDataSources . ReportDataSources Controls the data available to ReportQueries and Reports. Allows configuring access to different databases for use within metering. 28.2. Metering labels for AMQ Streams The following table lists the metering labels for AMQ Streams infrastructure components and integrations. Table 28.2. Metering Labels Label Possible values com.company Red_Hat rht.prod_name Red_Hat_Application_Foundations rht.prod_ver 2023.Q3 rht.comp AMQ_Streams rht.comp_ver 2.5 rht.subcomp Infrastructure cluster-operator entity-operator topic-operator user-operator zookeeper Application kafka-broker kafka-connect kafka-connect-build kafka-mirror-maker2 kafka-mirror-maker cruise-control kafka-bridge kafka-exporter drain-cleaner rht.subcomp_t infrastructure application Examples Infrastructure example (where the infrastructure component is entity-operator ) com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2023.Q3 rht.comp=AMQ_Streams rht.comp_ver=2.5 rht.subcomp=entity-operator rht.subcomp_t=infrastructure Application example (where the integration deployment name is kafka-bridge ) com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2023.Q3 rht.comp=AMQ_Streams rht.comp_ver=2.5 rht.subcomp=kafka-bridge rht.subcomp_t=application
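Because the metering labels are ordinary Kubernetes labels, you can use standard label selectors to check which pods in your cluster carry them. The following commands are a sketch; kafka-namespace is a placeholder for the namespace where AMQ Streams is deployed, and the selectors assume the labels are present on your pods:
oc get pods -n kafka-namespace -l rht.comp=AMQ_Streams --show-labels
oc get pods -n kafka-namespace -l rht.subcomp_t=infrastructure
oc get pods -n kafka-namespace -l rht.subcomp=kafka-bridge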
|
[
"com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2023.Q3 rht.comp=AMQ_Streams rht.comp_ver=2.5 rht.subcomp=entity-operator rht.subcomp_t=infrastructure",
"com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2023.Q3 rht.comp=AMQ_Streams rht.comp_ver=2.5 rht.subcomp=kafka-bridge rht.subcomp_t=application"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/using-metering-str
|
14.5. Deleting a Snapper Snapshot
|
14.5. Deleting a Snapper Snapshot To delete a snapshot: You can use the list command to verify that the snapshot was successfully deleted.
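For example, assuming a Snapper configuration named lvm_config and a snapshot number of 8 (both values are placeholders used for illustration only):
snapper -c lvm_config delete 8
snapper -c lvm_config list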
|
[
"snapper -c config_name delete snapshot_number"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/snapper-delete
|
Installing on bare metal
|
Installing on bare metal OpenShift Container Platform 4.16 Installing OpenShift Container Platform on bare metal Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_bare_metal/index
|
Chapter 14. Backup and restore
|
Chapter 14. Backup and restore 14.1. Backup and restore by using VM snapshots You can back up and restore virtual machines (VMs) by using snapshots. Snapshots are supported by the following storage providers: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Online snapshots have a default time deadline of five minutes ( 5m ) that can be changed, if needed. Important Online snapshots are supported for virtual machines that have hot plugged virtual disks. However, hot plugged disks that are not in the virtual machine specification are not included in the snapshot. To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent if it is not included with your operating system. The QEMU guest agent is included with the default Red Hat templates. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 14.1.1. About snapshots A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a development version. A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state). When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. You can perform the following snapshot actions: Create a new snapshot List all snapshots attached to a specific VM Restore a VM from a snapshot Delete an existing VM snapshot VM snapshot controller and custom resources The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots: VirtualMachineSnapshot : Represents a user request to create a snapshot. It contains information about the current state of the VM. VirtualMachineSnapshotContent : Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. VirtualMachineRestore : Represents a user request to restore a VM from a snapshot. The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping. 14.1.2. Creating snapshots You can create snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line. 14.1.2.1. Creating a snapshot by using the web console You can create a snapshot of a virtual machine (VM) by using the OpenShift Container Platform web console. 
The VM snapshot includes disks that meet the following requirements: Either a data volume or a persistent volume claim Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. If the VM is running, click the options menu and select Stop to power it down. Click the Snapshots tab and then click Take Snapshot . Enter the snapshot name. Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot. If your VM has disks that cannot be included in the snapshot and you wish to proceed, select I am aware of this warning and wish to proceed . Click Save . 14.1.2.2. Creating a snapshot by using the command line You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. Prerequisites Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots. Install the OpenShift CLI ( oc ). Optional: Power down the VM for which you want to create a snapshot. Procedure Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM as in the following example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> Create the VirtualMachineSnapshot object: USD oc create -f <snapshot_name>.yaml The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot , and updates the status and readyToUse fields of the VirtualMachineSnapshot object. Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot: Enter the following command: USD oc wait <vm_name> <snapshot_name> --for condition=Ready Verify the status of the snapshot: InProgress - The online snapshot operation is still in progress. Succeeded - The online snapshot operation completed successfully. Failed - The online snapshot operation failed. Note Online snapshots have a default time deadline of five minutes ( 5m ). If the snapshot does not complete successfully in five minutes, the status is set to failed . Afterwards, the file system will be thawed and the VM unfrozen, but the status remains failed until you delete the failed snapshot image. To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time designated in minutes ( m ) or in seconds ( s ) that you want to specify before the snapshot operation times out. To set no deadline, you can specify 0 , though this is generally not recommended, as it can result in an unresponsive VM. If you do not specify a unit of time such as m or s , the default is seconds ( s ).
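As a sketch of what setting the deadline might look like, the following manifest extends the earlier example with a ten-minute limit. The failureDeadline field name is an assumption based on the attribute described above; verify it against the VirtualMachineSnapshot CRD installed in your cluster:
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: <snapshot_name>
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: <vm_name>
  failureDeadline: 10m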
Verification Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent and that the readyToUse flag is set to true : USD oc describe vmsnapshot <snapshot_name> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 1 The status field of the Progressing condition specifies if the snapshot is still being created. 2 The status field of the Ready condition specifies if the snapshot creation process is complete. 3 Specifies if the snapshot is ready to be used. 4 Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot. 14.1.3. Verifying online snapshots by using snapshot indications Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. Prerequisites You must have attempted to create an online VM snapshot. Procedure Display the output from the snapshot indications by performing one of the following actions: Use the command line to view indicator output in the status stanza of the VirtualMachineSnapshot object YAML. In the web console, click VirtualMachineSnapshot Status in the Snapshot details screen. Verify the status of your online VM snapshot by viewing the values of the status.indications parameter: Online indicates that the VM was running during online snapshot creation. GuestAgent indicates that the QEMU guest agent was running during online snapshot creation. NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error. 14.1.4. Restoring virtual machines from snapshots You can restore virtual machines (VMs) from snapshots by using the OpenShift Container Platform web console or the command line. 14.1.4.1. Restoring a VM from a snapshot by using the web console You can restore a virtual machine (VM) to a configuration represented by a snapshot in the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. If the VM is running, click the options menu and select Stop to power it down. 
Click the Snapshots tab to view a list of snapshots associated with the VM. Select a snapshot to open the Snapshot Details screen. Click the options menu and select Restore VirtualMachineSnapshot . Click Restore . 14.1.4.2. Restoring a VM from a snapshot by using the command line You can restore an existing virtual machine (VM) to a configuration by using the command line. You can only restore from an offline VM snapshot. Prerequisites Power down the VM you want to restore. Procedure Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source as in the following example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name> Create the VirtualMachineRestore object: USD oc create -f <vm_restore>.yaml The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. Verification Verify that the VM is restored to the state represented by the snapshot and that the complete flag is set to true : USD oc get vmrestore <vm_restore> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1 1 Specifies if the process of restoring the VM to the state represented by the snapshot is complete. 2 The status field of the Progressing condition specifies if the VM is still being restored. 3 The status field of the Ready condition specifies if the VM restoration process is complete. 14.1.5. Deleting snapshots You can delete snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line. 14.1.5.1. Deleting a snapshot by using the web console You can delete an existing virtual machine (VM) snapshot by using the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab to view a list of snapshots associated with the VM. Click the options menu beside a snapshot and select Delete VirtualMachineSnapshot . Click Delete . 14.1.5.2. 
Deleting a virtual machine snapshot in the CLI You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Delete the VirtualMachineSnapshot object: USD oc delete vmsnapshot <snapshot_name> The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object. Verification Verify that the snapshot is deleted and no longer attached to this VM: USD oc get vmsnapshot 14.1.6. Additional resources CSI Volume Snapshots 14.2. Backing up and restoring virtual machines Important Red Hat supports using OpenShift Virtualization 4.14 or later with OADP 1.3.x or later. OADP versions earlier than 1.3.0 are not supported for back up and restore of OpenShift Virtualization. Back up and restore virtual machines by using the OpenShift API for Data Protection . You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. You can then install the Data Protection Application. Note OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options: Container Storage Interface (CSI) backups Container Storage Interface (CSI) backups with DataMover The following storage options are excluded: File system backup and restore Volume snapshot backup and restore For more information, see Backing up applications with File System Backup: Kopia or Restic . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 14.2.1. Installing and configuring OADP with OpenShift Virtualization As a cluster administrator, you install OADP by installing the OADP Operator. The latest version of the OADP Operator installs Velero 1.14 . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins. Back up virtual machines by creating a Backup custom resource (CR). Warning Red Hat support is limited to only the following options: CSI backups CSI backups with DataMover. You restore the Backup CR by creating a Restore CR. Additional resources OADP plugins Backup custom resource (CR) Restore CR Using Operator Lifecycle Manager on restricted networks 14.2.2. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The kubevirt plugin is mandatory for OpenShift Virtualization. 3 Specify the plugin for the backup provider, for example, gcp , if it exists. 4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia are available. By default, Kopia runs on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available .
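The procedure above mentions backing up VMs by creating a Backup custom resource; as a hedged sketch only (not part of this reference), a namespace-scoped CSI backup might look like the following, where the metadata name, the target namespace, and the snapshotMoveData choice are assumptions to adapt to your environment:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vm-backup                  # hypothetical name
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <vm_namespace>               # namespace that contains the VMs to back up
  snapshotMoveData: true           # optional: move CSI snapshot data with the built-in Data Mover
  storageLocation: dpa-sample-1    # backup storage location shown in the verification step
Apply it with oc create -f backup.yaml (assumed file name) and monitor progress with oc get backups.velero.io -n openshift-adp.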
|
[
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>",
"oc create -f <snapshot_name>.yaml",
"oc wait <vm_name> <snapshot_name> --for condition=Ready",
"oc describe vmsnapshot <snapshot_name>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>",
"oc create -f <vm_restore>.yaml",
"oc get vmrestore <vm_restore>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1",
"oc delete vmsnapshot <snapshot_name>",
"oc get vmsnapshot",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/backup-and-restore
|
Chapter 20. Configuring Network Encryption in Red Hat Gluster Storage
|
Chapter 20. Configuring Network Encryption in Red Hat Gluster Storage Network encryption is the process of converting data into a cryptic format or code so that it can be securely transmitted on a network. Encryption prevents unauthorized use of the data. Red Hat Gluster Storage supports network encryption using TLS/SSL. When network encryption is enabled, Red Hat Gluster Storage uses TLS/SSL for authentication and authorization, in place of the authentication framework that is used for non-encrypted connections. The following types of encryption are supported: I/O encryption Encryption of the I/O connections between the Red Hat Gluster Storage clients and servers. Management encryption Encryption of management ( glusterd ) connections within a trusted storage pool, and between glusterd and NFS Ganesha or SMB clients. Network encryption is configured in the following files: /etc/ssl/glusterfs.pem Certificate file containing the system's uniquely signed TLS certificate. This file is unique for each system and must not be shared with others. /etc/ssl/glusterfs.key This file contains the system's unique private key. This file must not be shared with others. /etc/ssl/glusterfs.ca This file contains the certificates of the Certificate Authorities (CA) who have signed the certificates. The glusterfs.ca file must be identical on all servers in the trusted pool, and must contain the certificates of the signing CA for all servers and all clients. All clients should also have a .ca file that contains the certificates of the signing CA for all the servers. Red Hat Gluster Storage does not use the global CA certificates that come with the system, so you need to either create your own self-signed certificates, or create certificates and have them signed by a Certificate Authority. If you are using self-signed certificates, the CA file for the servers is a concatenation of the relevant .pem files of every server and every client. The client CA file is a concatenation of the certificate files of every server. /var/lib/glusterd/secure-access This file is required for management encryption. It enables encryption on the management ( glusterd ) connections between glusterd of all servers and the connection between clients, and contains any configuration required by the Certificate Authority. The glusterd service of all servers uses this file to fetch volfiles and notify the clients with the volfile changes. This file must be present on all servers and all clients for management encryption to work correctly. It can be empty, but most configurations require at least one line to set the certificate depth ( transport.socket.ssl-cert-depth ) required by the Certificate Authority. 20.1. Preparing Certificates To configure network encryption, each server and client needs a signed certificate and a private key. There are two options for certificates. Self-signed certificate Generating and signing the certificate yourself. Certificate Authority (CA) signed certificate Generating the certificate and then requesting that a Certificate Authority sign it. Both of these options ensure that data transmitted over the network cannot be accessed by a third party, but certificates signed by a Certificate Authority imply an added level of trust and verification to a customer using your storage. Procedure 20.1. Preparing a self-signed certificate Generate and sign certificates for each server and client Perform the following steps on each server and client. 
Generate a private key for this machine Generate a self-signed certificate for this machine The following command generates a signed certificate that expires in 365 days, instead of the default 30 days. Provide a short name for this machine in place of COMMONNAME . This is generally a hostname, FQDN, or IP address. Generate client-side certificate authority lists From the first server, concatenate the /etc/ssl/glusterfs.pem files from all servers into a single file called glusterfs.ca , and place this file in the /etc/ssl directory on all clients. For example, running the following commands from server1 creates a certificate authority list ( .ca file) that contains the certificates ( .pem files) of two servers, and copies the certificate authority list ( .ca file) to three clients. Generate server-side glusterfs.ca files From the first server, append the certificates ( /etc/ssl/glusterfs.pem files) from all clients to the end of the certificate authority list ( /etc/ssl/glusterfs.ca file) generated in the step. For example, running the following commands from server1 appends the certificates ( .pem files) of three clients to the certificate authority list ( .ca file) on server1 , and then copies that certificate authority list ( .ca file) to one other server. Verify server certificates Run the following command in the /etc/ssl directory on the servers to verify the certificate on that machine against the Certificate Authority list. Your certificate is correct if the output of this command is glusterfs.pem: OK . Note This process does not work for self-signed client certificates. Procedure 20.2. Preparing a Common Certificate Authority certificate Perform the following steps on each server and client you wish to authorize. Generate a private key Generate a certificate signing request The following command generates a certificate signing request for a certificate that expires in 365 days, instead of the default 30 days. Provide a short name for this machine in place of COMMONNAME . This is generally a hostname, FQDN, or IP address. Send the generated glusterfs.csr file to your Certificate Authority Your Certificate Authority provides a signed certificate for this machine in the form of a .pem file, and the certificates of the Certificate Authority in the form of a .ca file. Place the .pem file provided by the Certificate Authority Ensure that the .pem file is called glusterfs.pem . Place this file in the /etc/ssl directory of this server only. Place the .ca file provided by the Certificate Authority Ensure that the .ca file is called glusterfs.ca . Place the .ca file in the /etc/ssl directory of all servers. Verify your certificates Run the following command in the /etc/ssl directory on all clients and servers to verify the certificate on that machine against the Certificate Authority list. Your certificate is correct if the output of this command is glusterfs.pem: OK .
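As a hedged illustration of the management encryption prerequisite described earlier (the /var/lib/glusterd/secure-access file), the file can be created with a single option line; the certificate depth value of 2 is an assumption that depends on the length of your CA chain:
# Run on every server and client that participates in management encryption
echo "option transport.socket.ssl-cert-depth 2" > /var/lib/glusterd/secure-access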
|
[
"openssl genrsa -out /etc/ssl/glusterfs.key 2048",
"openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj \"/CN= COMMONNAME \" -days 365 -out /etc/ssl/glusterfs.pem",
"cat /etc/ssl/glusterfs.pem > /etc/ssl/glusterfs.ca ssh user@server2 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca client1:/etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca client2:/etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca client3:/etc/ssl/glusterfs.ca",
"ssh user@client1 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca ssh user@client2 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca ssh user@client3 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca server2:/etc/ssl/glusterfs.ca",
"openssl verify -verbose -CAfile glusterfs.ca glusterfs.pem",
"openssl genrsa -out /etc/ssl/glusterfs.key 2048",
"openssl req -new -sha256 -key /etc/ssl/glusterfs.key -subj '/CN=<COMMONNAME>' -days 365 -out glusterfs.csr",
"openssl verify -verbose -CAfile glusterfs.ca glusterfs.pem"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-network_encryption
|
Chapter 8. Configuring external user groups
|
Chapter 8. Configuring external user groups Satellite does not associate external users with their user group automatically. You must create a user group with the same name as in the external source on Satellite. Members of the external user group then automatically become members of the Satellite user group and receive the associated permissions. The configuration of external user groups depends on the type of external authentication. To assign additional permissions to an external user, add this user to an internal user group that has no external mapping specified. Then assign the required roles to this group. Prerequisites If you use an LDAP server, configure Satellite to use LDAP authentication. For more information, see Chapter 6, Configuring an LDAP server as an external identity provider for Satellite . When using external user groups from an LDAP source, you cannot use the USDlogin variable as a substitute for the account user name. You must use either an anonymous or dedicated service user. If you use an Identity Management or AD server, configure Satellite to use Identity Management or AD authentication. For more information, see Configuring External Authentication in Installing Satellite Server in a connected network environment . Ensure that at least one external user authenticates for the first time. Retain a copy of the external group names you want to use. To find the group membership of external users, enter the following command: Procedure In the Satellite web UI, navigate to Administer > User Groups , and click Create User Group . Specify the name of the new user group. Do not select any users to avoid adding users automatically when you refresh the external user group. Click the Roles tab and select the roles you want to assign to the user group. Alternatively, select the Administrator checkbox to assign all available permissions. Click the External groups tab, then click Add external user group , and select an authentication source from the Auth source drop-down menu. Specify the exact name of the external group in the Name field. Click Submit .
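If you prefer the CLI, a rough Hammer sketch of the same procedure follows; the subcommands, option names, and the example group, role, and authentication source values are assumptions to verify with hammer user-group --help on your Satellite version:
# Create the internal user group and assign a role (names are examples)
hammer user-group create --name "example-group" --roles "Viewer"
# Map it to the external group of the same name on the LDAP authentication source (ID 1 is an assumption)
hammer user-group external create --user-group "example-group" --auth-source-id 1 --name "example-group"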
|
[
"id username"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_authentication_for_red_hat_satellite_users/configuring_external_user_groups_authentication
|
Chapter 9. Saga EIP
|
Chapter 9. Saga EIP 9.1. Overview The Saga EIP provides a way to define a series of related actions in a Camel route that can be either completed successfully or not executed or compensated. Saga implementations coordinate distributed services communicating using any transport towards a globally consistent outcome. Saga EIPs are different from classical ACID distributed (XA) transactions because the status of the different participating services is guaranteed to be consistent only at the end of the Saga and not in any intermediate step. Saga EIPs are suitable for the use cases where usage of distributed transactions is discouraged. For example, services participating in a Saga are allowed to use any kind of datastore, such as classical databases or even NoSQL non-transactional datastores. They are also suitable for being used in stateless cloud services as they do not require a transaction log to be stored alongside the service. Saga EIPs are also not required to be completed in a small amount of time, because they don't use database level locks, which is different from transactions. Hence they can live for a longer time span, from few seconds to several days. Saga EIPs do not use locks on data. Instead they define the concept of Compensating Action, which is an action that should be executed when the standard flow encounters an error, with the purpose of restoring the status that was present before the flow execution. Compensating actions can be declared in Camel routes using the Java or XML DSL and are invoked by Camel only when needed (if the saga is canceled due to an error). 9.2. Saga EIP Options The Saga EIP supports 6 options which are listed below: Name Description Default Type propagation Set the Saga propagation mode (REQUIRED, REQUIRES_NEW, MANDATORY, SUPPORTS, NOT_SUPPORTED, NEVER). REQUIRED SagaPropagation completionMode Determine how the Saga should be considered complete. When set to AUTO , the Saga is completed when the exchange that initiates the Saga is processed successfully, or compensated when it completes exceptionally. When set to MANUAL , the user must complete or compensate the Saga using the saga:complete or saga:compensate endpoints. AUTO SagaCompletionMode timeoutInMilliseconds Set the maximum amount of time for the Saga. After the timeout is expired, the saga is compensated automatically (unless a different decision has been taken in the meantime). Long compensation The compensation endpoint URI that must be called to compensate all changes done in the route. The route corresponding to the compensation URI must perform compensation and complete without error. If error occurs during compensation, the Saga service calls the compensation URI again to retry. SagaActionUriDefinition completion The completion endpoint URI that is called when the Saga is completed successfully. The route corresponding to the completion URI must perform completion tasks and terminate without error. If error occurs during completion, the Saga service calls the completion URI again to retry. SagaActionUriDefinition option Allows to save properties of the current exchange in order to reuse them in a compensation or completion callback route. Options are usually helpful, for example, to store and retrieve identifiers of objects that are deleted in compensating actions. Option values are transformed into input headers of the compensation/completion exchange. List 9.3. 
Saga Service Configuration The Saga EIP requires that a service implementing the interface org.apache.camel.saga.CamelSagaService is added to the Camel context. Camel currently supports the following Saga Service: InMemorySagaService : This is a basic implementation of the Saga EIP that does not support advanced features (no remote context propagation, no consistency guarantee in case of application failure). 9.3.1. Using the In-Memory Saga Service The In-memory Saga service is not recommended for production environments as it does not support persistence of the Saga status (it is kept only in-memory), so it cannot guarantee consistency of the Saga EIPs in case of application failure (for example, JVM crash). Also, when using an in-memory Saga service, Saga contexts cannot be propagated to remote services using transport-level headers (it can be done with other implementations). You can add the following code to customize the Camel context when you want to use the in-memory saga service. The service belongs to the camel-core module. context.addService(new org.apache.camel.impl.saga.InMemorySagaService()); 9.4. Examples For example, you want to place a new order and you have two distinct services in your system: one managing the orders and one managing the credit. Logically you can place an order if you have enough credit for it. With the Saga EIP you can model the direct:buy route as a Saga composed of two distinct actions, one to create the order and one to take the credit. Both actions must be executed, or neither of them, because an order placed without credit can be considered an inconsistent outcome (as well as a payment without an order). from("direct:buy") .saga() .to("direct:newOrder") .to("direct:reserveCredit"); The buy action does not change for the rest of the examples. Different options that are used to model the New Order and Reserve Credit actions are as follows: from("direct:newOrder") .saga() .propagation(SagaPropagation.MANDATORY) .compensation("direct:cancelOrder") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, "newOrder") .log("Order USD{body} created"); Here the propagation mode is set to MANDATORY meaning that any exchange flowing in this route must be already part of a Saga (and it is the case in this example, since the Saga is created in the direct:buy route). The direct:newOrder route declares a compensating action that is called direct:cancelOrder , responsible for undoing the order in case the Saga is canceled. Each exchange always contains an Exchange.SAGA_LONG_RUNNING_ACTION header that is used here as the id of the order. This identifies the order to delete in the corresponding compensating action, but it is not a requirement (options can be used as an alternative solution). The compensating action of direct:newOrder is direct:cancelOrder and it is shown below: from("direct:cancelOrder") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, "cancelOrder") .log("Order USD{body} cancelled"); It is called automatically by the Saga EIP implementation when the order should be cancelled, and it must complete without error. In case an error is thrown in the direct:cancelOrder route, the EIP implementation should periodically retry to execute the compensating action up to a certain limit. This means that any compensating action must be idempotent , so it should take into account that it may be triggered multiple times and should not fail in any case.
If compensation cannot be done after all retries, a manual intervention process should be triggered by the Saga implementation. Note It may happen that due to a delay in the execution of the direct:newOrder route the Saga is cancelled by another party in the meantime (due to an error in a parallel route or a timeout at Saga level). So, when the compensating action direct:cancelOrder is called, it may not find the Order record that it is supposed to cancel. It is important, in order to guarantee full global consistency, that any main action and its corresponding compensating action are commutative , for example, if compensation occurs before the main action it should have the same effect. Another possible approach, when a commutative behavior is not possible, is to consistently fail in the compensating action until data produced by the main action is found (or the maximum number of retries is exhausted). This approach may work in many contexts, but it is heuristic . The credit service is implemented almost in the same way as the order service. from("direct:reserveCredit") .saga() .propagation(SagaPropagation.MANDATORY) .compensation("direct:refundCredit") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(creditService, "reserveCredit") .log("Credit USD{header.amount} reserved in action USD{body}"); Call on compensation action: from("direct:refundCredit") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(creditService, "refundCredit") .log("Credit for action USD{body} refunded"); Here the compensating action for a credit reservation is a refund. 9.4.1. Handling Completion Events Some type of processing is required when the Saga is completed. Compensation endpoints are invoked when something goes wrong and the Saga is cancelled. The completion endpoints can be invoked to do further processing when the Saga is completed successfully. For example, in the order service above, we may need to know when the order is completed (and the credit reserved) to actually start preparing the order. We do not want to start to prepare the order if the payment is not done (unlike most modern CPUs that give you access to reserved memory before ensuring that you have rights to read it). This can be done easily with a modified version of the direct:newOrder endpoint: Invoke completion endpoint: from("direct:newOrder") .saga() .propagation(SagaPropagation.MANDATORY) .compensation("direct:cancelOrder") .completion("direct:completeOrder") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, "newOrder") .log("Order USD{body} created"); The direct:cancelOrder route is the same as in the previous example. Call on the successful completion as follows: from("direct:completeOrder") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, "findExternalId") .to("jms:prepareOrder") .log("Order USD{body} sent for preparation"); When the Saga is completed, the order is sent to a JMS queue for preparation. Like compensating actions, completion actions may also be called multiple times by the Saga coordinator (especially in case of errors, like network errors). In this example, the service listening to the prepareOrder JMS queue must be prepared to handle possible duplicates (see the Idempotent Consumer EIP for examples on how to handle duplicates). 9.4.2. Using Custom Identifiers and Options You can use Saga options to register custom identifiers.
For example, the credit service is refactored as follows: Generate a custom ID and set it in the body as follows: from("direct:reserveCredit") .bean(idService, "generateCustomId") .to("direct:creditReservation") Delegate action and mark the current body as needed in the compensating action. from("direct:creditReservation") .saga() .propagation(SagaPropagation.SUPPORTS) .option("CreditId", body()) .compensation("direct:creditRefund") .bean(creditService, "reserveCredit") .log("Credit USD{header.amount} reserved. Custom Id used is USD{body}"); Retrieve the CreditId option from the headers only if the saga is cancelled. from("direct:creditRefund") .transform(header("CreditId")) // retrieve the CreditId option from headers .bean(creditService, "refundCredit") .log("Credit for Custom Id USD{body} refunded"); The direct:creditReservation endpoint can be called outside of the Saga, by setting the propagation mode to SUPPORTS . This way multiple options can be declared in a Saga route. 9.4.3. Setting Timeouts Setting timeouts on Saga EIPs guarantees that a Saga does not remain stuck forever in the case of machine failure. The Saga EIP implementation has a default timeout set on all Saga EIPs that do not specify it explicitly. When the timeout expires, the Saga EIP will decide to cancel the Saga (and compensate all participants), unless a different decision has been taken before. Timeouts can be set on Saga participants as follows: from("direct:newOrder") .saga() .timeout(1, TimeUnit.MINUTES) // newOrder requires that the saga is completed within 1 minute .propagation(SagaPropagation.MANDATORY) .compensation("direct:cancelOrder") .completion("direct:completeOrder") // ... .log("Order USD{body} created"); All participants (for example, credit service, order service) can set their own timeout. The minimum value of those timeouts is taken as timeout for the saga when they are composed together. A timeout can also be specified at the Saga level as follows: from("direct:buy") .saga() .timeout(5, TimeUnit.MINUTES) // timeout at saga level .to("direct:newOrder") .to("direct:reserveCredit"); 9.4.4. Choosing Propagation In the examples above, we have used the MANDATORY and SUPPORTS propagation modes, but also the REQUIRED propagation mode, that is the default propagation used when nothing else is specified. These propagation modes map 1:1 the equivalent modes used in transactional contexts. Propagation Description REQUIRED Join the existing Saga or create a new one if it does not exist. REQUIRES_NEW Always create a new Saga. Suspend the old Saga and resume it when the new one terminates. MANDATORY A Saga must be already present. The existing Saga is joined. SUPPORTS If a Saga already exists, then join it. NOT_SUPPORTED If a Saga already exists, it is suspended and resumed when the current block completes. NEVER The current block must never be invoked within a Saga. 9.4.5. Using Manual Completion (Advanced) When a Saga cannot be all executed in a synchronous way, but it requires, for example, communication with external services using asynchronous communication channels, then the completion mode cannot be set to AUTO (default), because the Saga is not completed when the exchange that creates it is done. This is often the case for the Saga EIPs that have long execution times (hours, days). In these cases, the MANUAL completion mode should be used. 
from("direct:mysaga") .saga() .completionMode(SagaCompletionMode.MANUAL) .completion("direct:finalize") .timeout(2, TimeUnit.HOURS) .to("seda:newOrder") .to("seda:reserveCredit"); Add the asynchronous processing for seda:newOrder and seda:reserveCredit. These send the asynchronous callbacks to seda:operationCompleted. from("seda:operationCompleted") // an asynchronous callback .saga() .propagation(SagaPropagation.MANDATORY) .bean(controlService, "actionExecuted") .choice() .when(body().isEqualTo("ok")) .to("saga:complete") // complete the current saga manually (saga component) .end() You can add the direct:finalize endpoint to execute final actions. Setting the completion mode to MANUAL means that the Saga is not completed when the exchange is processed in the route direct:mysaga but it will last longer (max duration is set to 2 hours). When both asynchronous actions are completed the Saga is completed. The call to complete is done using the Camel Saga Component's saga:complete endpoint. There is a similar endpoint for manually compensating the Saga ( saga:compensate ). 9.5. XML Configuration Saga features are available for users that want to use the XML configuration. The following snippet shows an example: <route> <from uri="direct:start"/> <saga> <compensation uri="direct:compensation" /> <completion uri="direct:completion" /> <option optionName="myOptionKey"> <constant>myOptionValue</constant> </option> <option optionName="myOptionKey2"> <constant>myOptionValue2</constant> </option> </saga> <to uri="direct:action1" /> <to uri="direct:action2" /> </route>
|
[
"context.addService(new org.apache.camel.impl.saga.InMemorySagaService());",
"from(\"direct:buy\") .saga() .to(\"direct:newOrder\") .to(\"direct:reserveCredit\");",
"from(\"direct:newOrder\") .saga() .propagation(SagaPropagation.MANDATORY) .compensation(\"direct:cancelOrder\") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, \"newOrder\") .log(\"Order USD{body} created\");",
"from(\"direct:cancelOrder\") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, \"cancelOrder\") .log(\"Order USD{body} cancelled\");",
"from(\"direct:reserveCredit\") .saga() .propagation(SagaPropagation.MANDATORY) .compensation(\"direct:refundCredit\") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(creditService, \"reserveCredit\") .log(\"Credit USD{header.amount} reserved in action USD{body}\");",
"from(\"direct:refundCredit\") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(creditService, \"refundCredit\") .log(\"Credit for action USD{body} refunded\");",
"from(\"direct:newOrder\") .saga() .propagation(SagaPropagation.MANDATORY) .compensation(\"direct:cancelOrder\") .completion(\"direct:completeOrder\") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, \"newOrder\") .log(\"Order USD{body} created\");",
"from(\"direct:completeOrder\") .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION) .bean(orderManagerService, \"findExternalId\") .to(\"jms:prepareOrder\") .log(\"Order USD{body} sent for preparation\");",
"from(\"direct:reserveCredit\") .bean(idService, \"generateCustomId\") .to(\"direct:creditReservation\")",
"from(\"direct:creditReservation\") .saga() .propagation(SagaPropagation.SUPPORTS) .option(\"CreditId\", body()) .compensation(\"direct:creditRefund\") .bean(creditService, \"reserveCredit\") .log(\"Credit USD{header.amount} reserved. Custom Id used is USD{body}\");",
"from(\"direct:creditRefund\") .transform(header(\"CreditId\")) // retrieve the CreditId option from headers .bean(creditService, \"refundCredit\") .log(\"Credit for Custom Id USD{body} refunded\");",
"from(\"direct:newOrder\") .saga() .timeout(1, TimeUnit.MINUTES) // newOrder requires that the saga is completed within 1 minute .propagation(SagaPropagation.MANDATORY) .compensation(\"direct:cancelOrder\") .completion(\"direct:completeOrder\") // .log(\"Order USD{body} created\");",
"from(\"direct:buy\") .saga() .timeout(5, TimeUnit.MINUTES) // timeout at saga level .to(\"direct:newOrder\") .to(\"direct:reserveCredit\");",
"from(\"direct:mysaga\") .saga() .completionMode(SagaCompletionMode.MANUAL) .completion(\"direct:finalize\") .timeout(2, TimeUnit.HOURS) .to(\"seda:newOrder\") .to(\"seda:reserveCredit\");",
"from(\"seda:operationCompleted\") // an asynchronous callback .saga() .propagation(SagaPropagation.MANDATORY) .bean(controlService, \"actionExecuted\") .choice() .when(body().isEqualTo(\"ok\")) .to(\"saga:complete\") // complete the current saga manually (saga component) .end()",
"<route> <from uri=\"direct:start\"/> <saga> <compensation uri=\"direct:compensation\" /> <completion uri=\"direct:completion\" /> <option optionName=\"myOptionKey\"> <constant>myOptionValue</constant> </option> <option optionName=\"myOptionKey2\"> <constant>myOptionValue2</constant> </option> </saga> <to uri=\"direct:action1\" /> <to uri=\"direct:action2\" /> </route>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/saga-eip
|
Chapter 10. TokenReview [authentication.k8s.io/v1]
|
Chapter 10. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenReviewSpec is a description of the token authentication request. status object TokenReviewStatus is the result of the token authentication request. 10.1.1. .spec Description TokenReviewSpec is a description of the token authentication request. Type object Property Type Description audiences array (string) Audiences is a list of the identifiers that the resource server presented with the token identifies as. Audience-aware token authenticators will verify that the token was intended for at least one of the audiences in this list. If no audiences are provided, the audience will default to the audience of the Kubernetes apiserver. token string Token is the opaque bearer token. 10.1.2. .status Description TokenReviewStatus is the result of the token authentication request. Type object Property Type Description audiences array (string) Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token. An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token's audiences. A client of the TokenReview API that sets the spec.audiences field should validate that a compatible audience identifier is returned in the status.audiences field to ensure that the TokenReview server is audience aware. If a TokenReview returns an empty status.audience field where status.authenticated is "true", the token is valid against the audience of the Kubernetes API server. authenticated boolean Authenticated indicates that the token was associated with a known user. error string Error indicates that the token couldn't be checked user object UserInfo holds the information about the user needed to implement the user.Info interface. 10.1.3. .status.user Description UserInfo holds the information about the user needed to implement the user.Info interface. Type object Property Type Description extra object Any additional information provided by the authenticator. extra{} array (string) groups array (string) The names of groups this user is a part of. uid string A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs. username string The name that uniquely identifies this user among all active users. 10.1.4. .status.user.extra Description Any additional information provided by the authenticator. Type object 10.2. 
API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/tokenreviews POST : create a TokenReview /apis/authentication.k8s.io/v1/tokenreviews POST : create a TokenReview 10.2.1. /apis/oauth.openshift.io/v1/tokenreviews Table 10.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a TokenReview Table 10.2. Body parameters Parameter Type Description body TokenReview schema Table 10.3. HTTP responses HTTP code Reponse body 200 - OK TokenReview schema 201 - Created TokenReview schema 202 - Accepted TokenReview schema 401 - Unauthorized Empty 10.2.2. /apis/authentication.k8s.io/v1/tokenreviews Table 10.4. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a TokenReview Table 10.5. Body parameters Parameter Type Description body TokenReview schema Table 10.6. HTTP responses HTTP code Reponse body 200 - OK TokenReview schema 201 - Created TokenReview schema 202 - Accepted TokenReview schema 401 - Unauthorized Empty
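As an illustrative sketch only (not part of the API reference), a TokenReview request can be submitted from a file; the file name and audience value are assumptions:
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: <bearer_token>                    # opaque bearer token to authenticate
  audiences:
    - https://kubernetes.default.svc       # assumed audience; omit to default to the Kubernetes API server audience
Create it with oc create -f tokenreview.yaml -o yaml; the returned object carries status.authenticated and status.user filled in by the authenticator.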
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/tokenreview-authentication-k8s-io-v1
|
Chapter 2. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1]
|
Chapter 2. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] Description AdminPolicyBasedExternalRoute is a CRD allowing the cluster administrators to configure policies for external gateway IPs to be applied to all the pods contained in selected namespaces. Egress traffic from the pods that belong to the selected namespaces to outside the cluster is routed through these external gateway IPs. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute status object AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. 2.1.1. .spec Description AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute Type object Required from nextHops Property Type Description from object From defines the selectors that will determine the target namespaces to this CR. nextHops object NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. 2.1.2. .spec.from Description From defines the selectors that will determine the target namespaces to this CR. Type object Required namespaceSelector Property Type Description namespaceSelector object NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR 2.1.3. .spec.from.namespaceSelector Description NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.4. .spec.from.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.5. .spec.from.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.6. .spec.nextHops Description NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. Type object Property Type Description dynamic array DynamicHops defines a slices of DynamicHop. This field is optional. dynamic[] object DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. static array StaticHops defines a slice of StaticHop. This field is optional. static[] object StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. 2.1.7. .spec.nextHops.dynamic Description DynamicHops defines a slices of DynamicHop. This field is optional. Type array 2.1.8. .spec.nextHops.dynamic[] Description DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. Type object Required namespaceSelector podSelector Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. namespaceSelector object NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. networkAttachmentName string NetworkAttachmentName determines the multus network name to use when retrieving the pod IPs that will be used as the gateway IP. When this field is empty, the logic assumes that the pod is configured with HostNetwork and is using the node's IP as gateway. podSelector object PodSelector defines the selector to filter the pods that are external gateways. 2.1.9. .spec.nextHops.dynamic[].namespaceSelector Description NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.10. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.11. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.12. .spec.nextHops.dynamic[].podSelector Description PodSelector defines the selector to filter the pods that are external gateways. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.13. .spec.nextHops.dynamic[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.14. .spec.nextHops.dynamic[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.15. .spec.nextHops.static Description StaticHops defines a slice of StaticHop. This field is optional. Type array 2.1.16. .spec.nextHops.static[] Description StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. Type object Required ip Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. ip string IP defines the static IP to be used for egress traffic. The IP can be either IPv4 or IPv6. 2.1.17. .status Description AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. Type object Property Type Description lastTransitionTime string Captures the time when the last change was applied. messages array (string) An array of Human-readable messages indicating details about the status of the object. status string A concise indication of whether the AdminPolicyBasedRoute resource is applied with success 2.2. 
API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes DELETE : delete collection of AdminPolicyBasedExternalRoute GET : list objects of kind AdminPolicyBasedExternalRoute POST : create an AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} DELETE : delete an AdminPolicyBasedExternalRoute GET : read the specified AdminPolicyBasedExternalRoute PATCH : partially update the specified AdminPolicyBasedExternalRoute PUT : replace the specified AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status GET : read status of the specified AdminPolicyBasedExternalRoute PATCH : partially update status of the specified AdminPolicyBasedExternalRoute PUT : replace status of the specified AdminPolicyBasedExternalRoute 2.2.1. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes HTTP method DELETE Description delete collection of AdminPolicyBasedExternalRoute Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AdminPolicyBasedExternalRoute Table 2.2. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRouteList schema 401 - Unauthorized Empty HTTP method POST Description create an AdminPolicyBasedExternalRoute Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 2.5. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 202 - Accepted AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 2.2.2. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute HTTP method DELETE Description delete an AdminPolicyBasedExternalRoute Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AdminPolicyBasedExternalRoute Table 2.9. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AdminPolicyBasedExternalRoute Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AdminPolicyBasedExternalRoute Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 2.2.3. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status Table 2.15. 
Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute HTTP method GET Description read status of the specified AdminPolicyBasedExternalRoute Table 2.16. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AdminPolicyBasedExternalRoute Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AdminPolicyBasedExternalRoute Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty
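The tables above describe only the schema and REST endpoints; the API reference does not include a client example. The following is a minimal, purely illustrative sketch — it assumes the fabric8 Kubernetes Java client (6.x APIs) and a cluster-admin kubeconfig, and the policy name, namespace label, and gateway IP are invented for the example — showing how a cluster administrator could create an AdminPolicyBasedExternalRoute with a single static next hop:
import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.api.model.GenericKubernetesResourceBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.base.ResourceDefinitionContext;
import java.util.List;
import java.util.Map;

public class CreateExternalRoutePolicy {
    public static void main(String[] args) {
        // Describe the cluster-scoped CRD so the generic client knows how to address it.
        ResourceDefinitionContext context = new ResourceDefinitionContext.Builder()
                .withGroup("k8s.ovn.org")
                .withVersion("v1")
                .withKind("AdminPolicyBasedExternalRoute")
                .withPlural("adminpolicybasedexternalroutes")
                .withNamespaced(false)
                .build();

        // Required fields per the spec above: spec.from selects the target namespaces,
        // and spec.nextHops lists at least one external gateway (here a static IP).
        GenericKubernetesResource policy = new GenericKubernetesResourceBuilder()
                .withApiVersion("k8s.ovn.org/v1")
                .withKind("AdminPolicyBasedExternalRoute")
                .withNewMetadata().withName("example-static-route").endMetadata()
                .build();
        policy.setAdditionalProperty("spec", Map.of(
                "from", Map.of("namespaceSelector",
                        Map.of("matchLabels", Map.of("gateway", "enabled"))),
                "nextHops", Map.of("static", List.of(
                        Map.of("ip", "192.0.2.10", "bfdEnabled", false)))));

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.genericKubernetesResources(context).resource(policy).create();
        }
    }
}
With this hypothetical policy in place, egress traffic from pods in any namespace labeled gateway=enabled would be routed through 192.0.2.10, following the semantics described above.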
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/adminpolicybasedexternalroute-k8s-ovn-org-v1
|
Chapter 58. Implementing the Interceptors Processing Logic
|
Chapter 58. Implementing the Interceptors Processing Logic Abstract Interceptors are straightforward to implement. The bulk of their processing logic is in the handleMessage() method. This method receives the message data and manipulates it as needed. Developers may also want to add some special logic to handle fault processing cases. 58.1. Interceptor Flow Figure 58.1, "Flow through an interceptor" shows the process flow through an interceptor. Figure 58.1. Flow through an interceptor In normal message processing, only the handleMessage() method is called. The handleMessage() method is where the interceptor's message processing logic is placed. If an error occurs in the handleMessage() method of the interceptor, or any subsequent interceptor in the interceptor chain, the handleFault() method is called. The handleFault() method is useful for cleaning up after an interceptor in the event of an error. It can also be used to alter the fault message. 58.2. Processing messages Overview In normal message processing, an interceptor's handleMessage() method is invoked. It receives that message data as a Message object. Along with the actual contents of the message, the Message object may contain a number of properties related to the message or the message processing state. The exact contents of the Message object depends on the interceptors preceding the current interceptor in the chain. Getting the message contents The Message interface provides two methods that can be used in extracting the message contents: public <T> T getContent java.lang.Class<T> format The getContent() method returns the content of the message in an object of the specified class. If the contents are not available as an instance of the specified class, null is returned. The list of available content types is determined by the interceptor's location on the interceptor chain and the direction of the interceptor chain. public Collection<Attachment> getAttachments The getAttachments() method returns a Java Collection object containing any binary attachments associated with the message. The attachments are stored in org.apache.cxf.message.Attachment objects. Attachment objects provide methods for managing the binary data. Important Attachments are only available after the attachment processing interceptors have executed. Determining the message's direction The direction of a message can be determined by querying the message exchange. The message exchange stores the inbound message and the outbound message in separate properties. [3] The message exchange associated with a message is retrieved using the message's getExchange() method. As shown in Example 58.1, "Getting the message exchange" , getExchange() does not take any parameters and returns the message exchange as a org.apache.cxf.message.Exchange object. Example 58.1. Getting the message exchange Exchange getExchange The Exchange object has four methods, shown in Example 58.2, "Getting messages from a message exchange" , for getting the messages associated with an exchange. Each method will either return the message as a org.apache.cxf.Message object or it will return null if the message does not exist. Example 58.2. Getting messages from a message exchange Message getInMessage Message getInFaultMessage Message getOutMessage Message getOutFaultMessage Example 58.3, "Checking the direction of a message chain" shows code for determining if the current message is outbound. 
The method gets the message exchange and checks to see if the current message is the same as the exchange's outbound message. It also checks the current message against the exchanges outbound fault message to error messages on the outbound fault interceptor chain. Example 58.3. Checking the direction of a message chain Example Example 58.4, "Example message processing method" shows code for an interceptor that processes zip compressed messages. It checks the direction of the message and then performs the appropriate actions. Example 58.4. Example message processing method 58.3. Unwinding after an error Overview When an error occurs during the execution of an interceptor chain, the runtime stops traversing the interceptor chain and unwinds the chain by calling the handleFault() method of any interceptors in the chain that have already been executed. The handleFault() method can be used to clean up any resources used by an interceptor during normal message processing. It can also be used to rollback any actions that should only stand if message processing completes successfully. In cases where the fault message will be passed on to an outbound fault processing interceptor chain, the handleFault() method can also be used to add information to the fault message. Getting the message payload The handleFault() method receives the same Message object as the handleMessage() method used in normal message processing. Getting the message contents from the Message object is described in the section called "Getting the message contents" . Example Example 58.5, "Handling an unwinding interceptor chain" shows code used to ensure that the original XML stream is placed back into the message when the interceptor chain is unwound. Example 58.5. Handling an unwinding interceptor chain [3] It also stores inbound and outbound faults separately.
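Example 58.4 leaves the outbound branch as a placeholder comment. The sketch below is not part of the guide and is far simpler than the GZIP interceptors that ship with Apache CXF; it shows one way the outbound counterpart could be written, wrapping the message's output stream so that interceptors later in the chain write compressed data.
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class GzipOutInterceptor extends AbstractPhaseInterceptor<Message> {

    public GzipOutInterceptor() {
        // Run before the message body is written to the wire.
        super(Phase.PRE_STREAM);
    }

    public void handleMessage(Message message) throws Fault {
        boolean isOutbound = message == message.getExchange().getOutMessage()
                || message == message.getExchange().getOutFaultMessage();
        if (!isOutbound) {
            return;
        }
        try {
            OutputStream os = message.getContent(OutputStream.class);
            if (os != null) {
                // Replace the original stream; CXF closes it at the end of the chain,
                // which also finishes the GZIP trailer.
                message.setContent(OutputStream.class, new GZIPOutputStream(os));
            }
        } catch (IOException ioe) {
            throw new Fault(ioe);
        }
    }
}
A real deployment would also set a Content-Encoding header so the receiving side knows to decompress; that detail is omitted here for brevity.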
|
[
"public static boolean isOutbound() { Exchange exchange = message.getExchange(); return message != null && exchange != null && (message == exchange.getOutMessage() || message == exchange.getOutFaultMessage()); }",
"import java.io.IOException; import java.io.InputStream; import java.util.zip.GZIPInputStream; import org.apache.cxf.message.Message; import org.apache.cxf.phase.AbstractPhaseInterceptor; import org.apache.cxf.phase.Phase; public class StreamInterceptor extends AbstractPhaseInterceptor<Message> { public void handleMessage(Message message) { boolean isOutbound = false; isOutbound = message == message.getExchange().getOutMessage() || message == message.getExchange().getOutFaultMessage(); if (!isOutbound) { try { InputStream is = message.getContent(InputStream.class); GZIPInputStream zipInput = new GZIPInputStream(is); message.setContent(InputStream.class, zipInput); } catch (IOException ioe) { ioe.printStackTrace(); } } else { // zip the outbound message } } }",
"@Override public void handleFault(SoapMessage message) { super.handleFault(message); XMLStreamWriter writer = (XMLStreamWriter)message.get(ORIGINAL_XML_WRITER); if (writer != null) { message.setContent(XMLStreamWriter.class, writer); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFInterceptorImpl
|
Appendix D. Modifying and Enforcing Cluster Service Resource Actions
|
Appendix D. Modifying and Enforcing Cluster Service Resource Actions Actions are to a resource agent what method invocations are to an object in the object oriented approach to programming - a form of a contract between the provider of the functionality (resource agent) and its user (RGManager), technically posing as an interface (API). RGManager relies on resource agents recognizing a set of a few actions that are part of this contract. Actions can have properties with the respective defaults declared in the same way as a set of actions the particular agent supports; that is, in its metadata (themselves obtainable by means of the mandatory meta-data action). You can override these defaults with explicit <action> specifications in the cluster.conf file: For the start and stop actions, you may want to modify the timeout property. For the status action, you may want to modify the timeout , interval , or depth properties. For information on the status action and its properties, see Section D.1, "Modifying the Resource Status Check Interval" . Note that the timeout property for an action is enforced only if you explicitly configure timeout enforcement, as described in Section D.2, "Enforcing Resource Timeouts" . For an example of how to configure the cluster.conf file to modify a resource action parameter, see Section D.1, "Modifying the Resource Status Check Interval" . Note To fully comprehend the information in this appendix, you may require detailed understanding of resource agents and the cluster configuration file, /etc/cluster/cluster.conf . For a comprehensive list and description of cluster.conf elements and attributes, see the cluster schema at /usr/share/cluster/cluster.rng , and the annotated schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html (for example /usr/share/doc/cman-3.0.12/cluster_conf.html ). D.1. Modifying the Resource Status Check Interval RGManager checks the status of individual resources, not whole services. Every 10 seconds, RGManager scans the resource tree, looking for resources that have passed their "status check" interval. Each resource agent specifies the amount of time between periodic status checks. Each resource utilizes these timeout values unless explicitly overridden in the cluster.conf file using the special <action> tag: <action name="status" depth="*" interval="10" /> This tag is a special child of the resource itself in the cluster.conf file. For example, if you had a file system resource for which you wanted to override the status check interval you could specify the file system resource in the cluster.conf file as follows: Some agents provide multiple "depths" of checking. For example, a normal file system status check (depth 0) checks whether the file system is mounted in the correct place. A more intensive check is depth 10, which checks whether you can read a file from the file system. A status check of depth 20 checks whether you can write to the file system. In the example given here, the depth is set to * , which indicates that these values should be used for all depths. The result is that the test file system is checked at the highest-defined depth provided by the resource-agent (in this case, 20) every 10 seconds.
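Administrators normally add the <action> override by editing /etc/cluster/cluster.conf by hand (or with the ccs tool), then incrementing config_version and propagating the configuration. Purely as an illustration of where the tag sits in the XML tree — this helper is an assumption for the example, not a supported tool, and the file path and resource name are taken from the example above — the same override could be produced with the standard JDK DOM APIs:
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class AddStatusAction {
    public static void main(String[] args) throws Exception {
        File conf = new File("/etc/cluster/cluster.conf");
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(conf);

        // Find the <fs name="test"> resource and add the status-check override as a child.
        NodeList fsNodes = doc.getElementsByTagName("fs");
        for (int i = 0; i < fsNodes.getLength(); i++) {
            Element fs = (Element) fsNodes.item(i);
            if ("test".equals(fs.getAttribute("name"))) {
                Element action = doc.createElement("action");
                action.setAttribute("name", "status");
                action.setAttribute("depth", "*");
                action.setAttribute("interval", "10");
                fs.appendChild(action);
            }
        }

        // Write the modified configuration back out.
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(conf));
    }
}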
|
[
"<fs name=\"test\" device=\"/dev/sdb3\"> <action name=\"status\" depth=\"*\" interval=\"10\" /> <nfsexport...> </nfsexport> </fs>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ap-status-check-CA
|
Chapter 14. Encrypting etcd data
|
Chapter 14. Encrypting etcd data 14.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup. Note Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted. If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a state of etcd from the respective etcd snapshot. 14.2. Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. Warning Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted. After you enable etcd encryption, several changes can occur: The etcd encryption might affect the memory consumption of a few resources. You might notice a transient affect on backup performance because the leader must serve the backup. A disk I/O can affect the node that receives the backup state. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to aescbc : spec: encryption: type: aescbc 1 1 The aescbc type means that AES-CBC with PKCS#7 padding and a 32 byte key is used to perform the encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 
Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 14.3. Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to identity : spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again.
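The verification steps above query the Encrypted condition with oc and a JSONPath expression. As a purely illustrative alternative — a minimal sketch assuming the fabric8 Kubernetes Java client (6.x) and credentials with cluster-admin scope, not part of the official procedure — the same condition can be read programmatically from the openshiftapiserver operator resource:
import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.dsl.base.ResourceDefinitionContext;
import java.util.List;
import java.util.Map;

public class EncryptionStatusCheck {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        ResourceDefinitionContext ctx = new ResourceDefinitionContext.Builder()
                .withGroup("operator.openshift.io")
                .withVersion("v1")
                .withKind("OpenShiftAPIServer")
                .withPlural("openshiftapiservers")
                .withNamespaced(false)
                .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            GenericKubernetesResource cr =
                    client.genericKubernetesResources(ctx).withName("cluster").get();

            // status and conditions are untyped here, so walk them as maps and lists.
            Map<String, Object> status =
                    (Map<String, Object>) cr.getAdditionalProperties().get("status");
            List<Map<String, Object>> conditions =
                    (List<Map<String, Object>>) status.get("conditions");

            conditions.stream()
                    .filter(c -> "Encrypted".equals(c.get("type")))
                    .forEach(c -> System.out.println(c.get("reason") + ": " + c.get("message")));
        }
    }
}
EncryptionCompleted in the output corresponds to the same condition reported by the oc query; the kubeapiserver and authentication operator resources can be inspected the same way by changing the kind and plural.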
|
[
"oc edit apiserver",
"spec: encryption: type: aescbc 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_and_compliance/encrypting-etcd
|
Chapter 15. Using the partition reassignment tool
|
Chapter 15. Using the partition reassignment tool When scaling a Kafka cluster, you may need to add or remove brokers and update the distribution of partitions or the replication factor of topics. To update partitions and topics, you can use the kafka-reassign-partitions.sh tool. You can change the replication factor of a topic using the kafka-reassign-partitions.sh tool. The tool can also be used to reassign partitions and balance the distribution of partitions across brokers to improve performance. However, it is recommended to use Cruise Control for automated partition reassignments and cluster rebalancing . Cruise Control can move topics from one broker to another without any downtime, and it is the most efficient way to reassign partitions. The AMQ Streams Cruise Control integration does not support changing the replication factor of a topic. 15.1. Partition reassignment tool overview The partition reassignment tool provides the following capabilities for managing Kafka partitions and brokers: Redistributing partition replicas Scale your cluster up and down by adding or removing brokers, and move Kafka partitions from heavily loaded brokers to under-utilized brokers. To do this, you must create a partition reassignment plan that identifies which topics and partitions to move and where to move them. Cruise Control is recommended for this type of operation as it automates the cluster rebalancing process . Scaling topic replication factor up and down Increase or decrease the replication factor of your Kafka topics. To do this, you must create a partition reassignment plan that identifies the existing replication assignment across partitions and an updated assignment with the replication factor changes. Changing the preferred leader Change the preferred leader of a Kafka partition. This can be useful if the current preferred leader is unavailable or if you want to redistribute load across the brokers in the cluster. To do this, you must create a partition reassignment plan that specifies the new preferred leader for each partition by changing the order of replicas. Changing the log directories to use a specific JBOD volume Change the log directories of your Kafka brokers to use a specific JBOD volume. This can be useful if you want to move your Kafka data to a different disk or storage device. To do this, you must create a partition reassignment plan that specifies the new log directory for each topic. 15.1.1. Generating a partition reassignment plan The partition reassignment tool ( kafka-reassign-partitions.sh ) works by generating a partition assignment plan that specifies which partitions should be moved from their current broker to a new broker. If you are satisfied with the plan, you can execute it. The tool then does the following: Migrates the partition data to the new broker Updates the metadata on the Kafka brokers to reflect the new partition assignments Triggers a rolling restart of the Kafka brokers to ensure that the new assignments take effect The partition reassignment tool has three different modes: --generate Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. Because this operates on whole topics, it cannot be used when you only want to reassign some partitions of some topics. --execute Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers that gain partitions as a result become followers of the partition leader. 
For a given partition, once the new broker has caught up and joined the ISR (in-sync replicas) the old broker will stop being a follower and will delete its replica. --verify Using the same reassignment JSON file as the --execute step, --verify checks whether all the partitions in the file have been moved to their intended brokers. If the reassignment is complete, --verify also removes any traffic throttles ( --throttle ) that are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished. It is only possible to have one reassignment running in a cluster at any given time, and it is not possible to cancel a running reassignment. If you must cancel a reassignment, wait for it to complete and then perform another reassignment to revert the effects of the first reassignment. The kafka-reassign-partitions.sh will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop in-progress reassignment. 15.1.2. Specifying topics in a partition reassignment JSON file The kafka-reassign-partitions.sh tool uses a reassignment JSON file that specifies the topics to reassign. You can generate a reassignment JSON file or create a file manually if you want to move specific partitions. A basic reassignment JSON file has the structure presented in the following example, which describes three partitions belonging to two Kafka topics. Each partition is reassigned to a new set of replicas, which are identified by their broker IDs. The version , topic , partition , and replicas properties are all required. Example partition reassignment JSON file structure 1 The version of the reassignment JSON file format. Currently, only version 1 is supported, so this should always be 1. 2 An array that specifies the partitions to be reassigned. 3 The name of the Kafka topic that the partition belongs to. 4 The ID of the partition being reassigned. 5 An ordered array of the IDs of the brokers that should be assigned as replicas for this partition. The first broker in the list is the leader replica. Note Partitions not included in the JSON are not changed. If you specify only topics using a topics array, the partition reassignment tool reassigns all the partitions belonging to the specified topics. Example reassignment JSON file structure for reassigning all partitions for a topic 15.1.3. Reassigning partitions between JBOD volumes When using JBOD storage in your Kafka cluster, you can reassign the partitions between specific volumes and their log directories (each volume has a single log directory). To reassign a partition to a specific volume, add log_dirs values for each partition in the reassignment JSON file. Each log_dirs array contains the same number of entries as the replicas array, since each replica should be assigned to a specific log directory. The log_dirs array contains either an absolute path to a log directory or the special value any . The any value indicates that Kafka can choose any available log directory for that replica, which can be useful when reassigning partitions between JBOD volumes. Example reassignment JSON file structure with log directories 15.1.4. Throttling partition reassignment Partition reassignment can be a slow process because it involves transferring large amounts of data between brokers. To avoid a detrimental impact on clients, you can throttle the reassignment process. 
Use the --throttle parameter with the kafka-reassign-partitions.sh tool to throttle a reassignment. You specify a maximum threshold in bytes per second for the movement of partitions between brokers. For example, --throttle 5000000 sets a maximum threshold for moving partitions of 50 MBps. Throttling might cause the reassignment to take longer to complete. If the throttle is too low, the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete. If the throttle is too high, clients will be impacted. For example, for producers, this could manifest as higher than normal latency waiting for acknowledgment. For consumers, this could manifest as a drop in throughput caused by higher latency between polls. 15.2. Reassigning partitions after adding brokers Use a reassignment file generated by the kafka-reassign-partitions.sh tool to reassign partitions after increasing the number of brokers in a Kafka cluster. The reassignment file should describe how partitions are reassigned to brokers in the enlarged Kafka cluster. You apply the reassignment specified in the file to the brokers and then verify the new partition assignments. This procedure describes a secure scaling process that uses TLS. You'll need a Kafka cluster that uses TLS encryption and mTLS authentication. Note Though you can use the kafka-reassign-partitions.sh tool, Cruise Control is recommended for automated partition reassignments and cluster rebalancing . Cruise Control can move topics from one broker to another without any downtime, and it is the most efficient way to reassign partitions. Prerequisites An existing Kafka cluster. A new machine with the additional AMQ broker installed . You have created a JSON file to specify how partitions should be reassigned to brokers in the enlarged cluster. In this procedure, we are reassigning all partitions for a topic called my-topic . A JSON file named topics.json specifies the topic, and is used to generate a reassignment.json file. Example JSON file specifies my-topic { "version": 1, "topics": [ { "topic": "my-topic"} ] } Procedure Create a configuration file for the new broker using the same settings as for the other brokers in your cluster, except for broker.id , which should be a number that is not already used by any of the other brokers. Start the new Kafka broker passing the configuration file you created in the step as the argument to the kafka-server-start.sh script: su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka Repeat the above steps for each new broker. If you haven't done so, generate a reassignment JSON file named reassignment.json using the kafka-reassign-partitions.sh tool. Example command to generate the reassignment JSON file /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --topics-to-move-json-file topics.json \ 1 --broker-list 0,1,2,3,4 \ 2 --generate 1 The JSON file that specifies the topic. 2 Brokers IDs in the kafka cluster to include in the operation. This assumes broker 4 has been added. 
Example reassignment JSON file showing the current and proposed replica assignment Current partition replica assignment {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,0],"log_dirs":["any","any","any"]}]} Proposed partition reassignment configuration {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2,3],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3,4],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,4,0],"log_dirs":["any","any","any","any"]}]} Save a copy of this file locally in case you need to revert the changes later on. Run the partition reassignment using the --execute option. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --execute If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --throttle 5000000 \ --execute Verify that the reassignment has completed using the --verify option. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --verify The reassignment has finished when the --verify command reports that each of the partitions being moved has completed successfully. This final --verify will also have the effect of removing any reassignment throttles. 15.3. Reassigning partitions before removing brokers Use a reassignment file generated by the kafka-reassign-partitions.sh tool to reassign partitions before decreasing the number of brokers in a Kafka cluster. The reassignment file must describe how partitions are reassigned to the remaining brokers in the Kafka cluster. You apply the reassignment specified in the file to the brokers and then verify the new partition assignments. Brokers in the highest numbered pods are removed first. This procedure describes a secure scaling process that uses TLS. You'll need a Kafka cluster that uses TLS encryption and mTLS authentication. Note Though you can use the kafka-reassign-partitions.sh tool, Cruise Control is recommended for automated partition reassignments and cluster rebalancing . Cruise Control can move topics from one broker to another without any downtime, and it is the most efficient way to reassign partitions. Prerequisites An existing Kafka cluster. You have created a JSON file to specify how partitions should be reassigned to brokers in the reduced cluster. In this procedure, we are reassigning all partitions for a topic called my-topic . A JSON file named topics.json specifies the topic, and is used to generate a reassignment.json file. Example JSON file specifies my-topic { "version": 1, "topics": [ { "topic": "my-topic"} ] } Procedure If you haven't done so, generate a reassignment JSON file named reassignment.json using the kafka-reassign-partitions.sh tool. Example command to generate the reassignment JSON file /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --topics-to-move-json-file topics.json \ 1 --broker-list 0,1,2,3 \ 2 --generate 1 The JSON file that specifies the topic. 
2 Brokers IDs in the kafka cluster to include in the operation. This assumes broker 4 has been removed. Example reassignment JSON file showing the current and proposed replica assignment Current partition replica assignment {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[3,4,2,0],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[0,2,3,1],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[1,3,0,4],"log_dirs":["any","any","any","any"]}]} Proposed partition reassignment configuration {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,0],"log_dirs":["any","any","any"]}]} Save a copy of this file locally in case you need to revert the changes later on. Run the partition reassignment using the --execute option. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --execute If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --throttle 5000000 \ --execute Verify that the reassignment has completed using the --verify option. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --verify The reassignment has finished when the --verify command reports that each of the partitions being moved has completed successfully. This final --verify will also have the effect of removing any reassignment throttles. Check that each broker being removed does not have any live partitions in its log ( log.dirs ). ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-deleteUSD' If a log directory does not match the regular expression \.[a-z0-9]-deleteUSD , active partitions are still present. If you have active partitions, check the reassignment has finished or the configuration in the reassignment JSON file. You can run the reassignment again. Make sure that there are no active partitions before moving on to the step. Stop the broker. su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the Kafka broker has stopped. jcmd | grep kafka 15.4. Changing the replication factor of topics Use the kafka-reassign-partitions.sh tool to change the replication factor of topics in a Kafka cluster. This can be done using a reassignment file to describe how the topic replicas should be changed. Prerequisites An existing Kafka cluster. You have created a JSON file to specify the topics to include in the operation. In this procedure, a topic called my-topic has 4 replicas and we want to reduce it to 3. A JSON file named topics.json specifies the topic, and is used to generate a reassignment.json file. Example JSON file specifies my-topic { "version": 1, "topics": [ { "topic": "my-topic"} ] } Procedure If you haven't done so, generate a reassignment JSON file named reassignment.json using the kafka-reassign-partitions.sh tool. 
Example command to generate the reassignment JSON file /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --topics-to-move-json-file topics.json \ 1 --broker-list 0,1,2,3,4 \ 2 --generate 1 The JSON file that specifies the topic. 2 Brokers IDs in the kafka cluster to include in the operation. Example reassignment JSON file showing the current and proposed replica assignment Current partition replica assignment {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[3,4,2,0],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[0,2,3,1],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[1,3,0,4],"log_dirs":["any","any","any","any"]}]} Proposed partition reassignment configuration {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2,3],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3,4],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,4,0],"log_dirs":["any","any","any","any"]}]} Save a copy of this file locally in case you need to revert the changes later on. Edit the reassignment.json to remove a replica from each partition. For example use jq to remove the last replica in the list for each partition of the topic: Removing the last topic replica for each partition jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment.json Example reassignment file showing the updated replicas {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any","any"]},{"topic":"my-topic","partition":2,"replicas":[2,3,4],"log_dirs":["any","any","any","any"]}]} Make the topic replica change using the --execute option. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --execute Note Removing replicas from a broker does not require any inter-broker data movement, so there is no need to throttle replication. If you are adding replicas, then you may want to change the throttle rate. Verify that the change to the topic replicas has completed using the --verify option. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --reassignment-json-file reassignment.json \ --verify The reassignment has finished when the --verify command reports that each of the partitions being moved has completed successfully. This final --verify will also have the effect of removing any reassignment throttles. Run the bin/kafka-topics.sh command with the --describe option to see the results of the change to the topics. /opt/kafka/bin/kafka-reassign-partitions.sh \ --bootstrap-server localhost:9092 \ --describe Results of reducing the number of replicas for a topic my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4
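The procedures above drive every reassignment through kafka-reassign-partitions.sh. As an illustrative alternative — a minimal sketch, not part of this chapter, assuming the Kafka clients library (2.4 or later) on the classpath and the same my-topic layout used in the examples — the proposed assignment can also be applied and verified with the Kafka Admin API:
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class ReassignMyTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Mirror of the proposed assignment above: spread my-topic over brokers 0-4.
            Map<TopicPartition, Optional<NewPartitionReassignment>> plan = Map.of(
                new TopicPartition("my-topic", 0),
                    Optional.of(new NewPartitionReassignment(Arrays.asList(0, 1, 2, 3))),
                new TopicPartition("my-topic", 1),
                    Optional.of(new NewPartitionReassignment(Arrays.asList(1, 2, 3, 4))),
                new TopicPartition("my-topic", 2),
                    Optional.of(new NewPartitionReassignment(Arrays.asList(2, 3, 4, 0))));

            admin.alterPartitionReassignments(plan).all().get();

            // Poll until no reassignments remain in flight (equivalent to --verify).
            while (!admin.listPartitionReassignments().reassignments().get().isEmpty()) {
                Thread.sleep(5_000);
            }
        }
    }
}
Throttling is configured separately when using the Admin API (through the leader.replication.throttled.rate and follower.replication.throttled.rate broker configurations), which is one reason the command-line tool or Cruise Control is usually more convenient for large moves.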
|
[
"{ \"version\": 1, 1 \"partitions\": [ 2 { \"topic\": \"example-topic-1\", 3 \"partition\": 0, 4 \"replicas\": [1, 2, 3] 5 }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] } ] }",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"{ \"version\": 1, \"partitions\": [ { \"topic\": \"example-topic-1\", \"partition\": 0, \"replicas\": [1, 2, 3] \"log_dirs\": [\"/var/lib/kafka/data-0/kafka-log1\", \"any\", \"/var/lib/kafka/data-1/kafka-log2\"] }, { \"topic\": \"example-topic-1\", \"partition\": 1, \"replicas\": [2, 3, 4] \"log_dirs\": [\"any\", \"/var/lib/kafka/data-2/kafka-log3\", \"/var/lib/kafka/data-3/kafka-log4\"] }, { \"topic\": \"example-topic-2\", \"partition\": 0, \"replicas\": [3, 4, 5] \"log_dirs\": [\"/var/lib/kafka/data-4/kafka-log5\", \"any\", \"/var/lib/kafka/data-5/kafka-log6\"] } ] }",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep Kafka",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics.json \\ 1 --broker-list 0,1,2,3,4 \\ 2 --generate",
"Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,0],\"log_dirs\":[\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --execute",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --throttle 5000000 --execute",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --verify",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics.json \\ 1 --broker-list 0,1,2,3 \\ 2 --generate",
"Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[3,4,2,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[0,2,3,1],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[1,3,0,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,0],\"log_dirs\":[\"any\",\"any\",\"any\"]}]}",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --execute",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --throttle 5000000 --execute",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --verify",
"ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'",
"su - kafka /opt/kafka/bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"my-topic\"} ] }",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics.json \\ 1 --broker-list 0,1,2,3,4 \\ 2 --generate",
"Current partition replica assignment {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[3,4,2,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[0,2,3,1],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[1,3,0,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]} Proposed partition reassignment configuration {\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4,0],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}",
"jq '.partitions[].replicas |= del(.[-1])' reassignment.json > reassignment.json",
"{\"version\":1,\"partitions\":[{\"topic\":\"my-topic\",\"partition\":0,\"replicas\":[0,1,2],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":1,\"replicas\":[1,2,3],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]},{\"topic\":\"my-topic\",\"partition\":2,\"replicas\":[2,3,4],\"log_dirs\":[\"any\",\"any\",\"any\",\"any\"]}]}",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --execute",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassignment.json --verify",
"/opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --describe",
"my-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2 my-topic Partition: 1 Leader: 2 Replicas: 1,2,3 Isr: 1,2,3 my-topic Partition: 2 Leader: 3 Replicas: 2,3,4 Isr: 2,3,4"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/assembly-reassign-tool-str
|
Chapter 6. Managing metrics
|
Chapter 6. Managing metrics You can collect metrics to monitor how cluster components and your own workloads are performing. 6.1. Understanding metrics In OpenShift Dedicated, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. Metrics enable you to monitor how cluster components and your own workloads are performing. You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level. In OpenShift Dedicated, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics . For instance, you can expose a route to the prometheus-example-app example application and then run the following to view all of its available metrics: USD curl http://<example_app_endpoint>/metrics Example output # HELP http_requests_total Count of all HTTP requests # TYPE http_requests_total counter http_requests_total{code="200",method="get"} 4 http_requests_total{code="404",method="get"} 2 # HELP version Version information about this binary # TYPE version gauge version{version="v0.1.0"} 1 Additional resources Prometheus client library documentation 6.2. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 6.2.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml . Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 6.2.2. 
Specifying how a service is monitored To use the metrics exposed by your service, you must configure OpenShift Dedicated monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the dedicated-admin role or the monitoring-edit role. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a new YAML configuration file named example-app-service-monitor.yaml . Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app 1 Specify a user-defined namespace where your service runs. 2 Specify endpoint ports to be scraped by Prometheus. 3 Configure a selector to match your service based on its metadata labels. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. Verify that the ServiceMonitor resource is running: USD oc -n <namespace> get servicemonitor Example output NAME AGE prometheus-example-monitor 81m 6.2.3. Example service endpoint authentication settings You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs). The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. 6.2.3.1. Sample YAML authentication with a bearer token The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace: Example bearer token secret apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1 1 Specify an authentication token. The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth : Example bearer token authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the authentication token in the specified Secret object. 
2 The name of the Secret object that contains the authentication credentials. Important Do not use bearerTokenFile to configure bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected. 6.2.3.2. Sample YAML for Basic authentication The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace: Example Basic authentication secret apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2 1 Specify a username for authentication. 2 Specify a password for authentication. The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth : Example Basic authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the username in the specified Secret object. 2 4 The name of the Secret object that contains the Basic authentication. 3 The key that contains the password in the specified Secret object. 6.2.3.3. Sample YAML authentication with OAuth 2.0 The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace: Example OAuth 2.0 secret apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 1 Specify an Oauth 2.0 ID. 2 Specify an Oauth 2.0 secret. The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-oauth2 : Example OAuth 2.0 authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the OAuth 2.0 ID in the specified Secret object. 2 4 The name of the Secret object that contains the OAuth 2.0 credentials. 3 The key that contains the OAuth 2.0 secret in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . Additional resources How to scrape metrics using TLS in a ServiceMonitor configuration in a user-defined project 6.3. Querying metrics for all projects with the OpenShift Dedicated web console You can use the OpenShift Dedicated metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default OpenShift Dedicated and user-defined projects in the Metrics UI. Note Only dedicated administrators have access to the third-party UIs provided with OpenShift Dedicated monitoring. The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries. 
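For illustration, a custom query can be as small as a single metric selector or a rate expression. The following PromQL sketch assumes that the prometheus-example-app sample service from the earlier section is still deployed and exporting the http_requests_total counter; the metric, label, and namespace names come from that example and will differ for your own workloads.

# Current value of the counter for successful GET requests served by the sample app
http_requests_total{code="200",method="get"}

# Per-second request rate over the last five minutes, grouped by response code
sum by (code) (rate(http_requests_total{namespace="ns1"}[5m]))

Either expression can be entered in the Expression field described in the procedure that follows.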
Prerequisites You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective of the OpenShift Dedicated web console, click Observe and go to the Metrics tab. To add one or more queries, perform any of the following actions: Option Description Select an existing query. From the Select query drop-down list, select an existing query. Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Click Add query . Duplicate an existing query. Click the options menu to the query, then choose Duplicate query . Disable a query from being run. Click the options menu to the query and choose Disable query . To run queries that you created, click Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. By default, the query table shows an expanded view that lists every metric and its current value. Click the
down arrowhead to minimize the expanded view for a query. Optional: Save the page URL to use this set of queries again in the future. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions: Option Description Hide all metrics from a query. Click the options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Perform one of the following actions: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu to select the time range. Reset the time range. Click Reset zoom . Display outputs for all queries at a specific point in time. Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. Hide the plot. Click Hide graph . Additional resources Prometheus query documentation 6.4. Querying metrics for user-defined projects with the OpenShift Dedicated web console You can use the OpenShift Dedicated metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring. As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet. These queries are restricted to the selected project. You can also run custom Prometheus Query Language (PromQL) queries for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with OpenShift Dedicated monitoring. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure In the Developer perspective of the OpenShift Dedicated web console, click Observe and go to the Metrics tab. Select the project that you want to view metrics for from the Project: list. To add one or more queries, perform any of the following actions: Option Description Select an existing query. From the Select query drop-down list, select an existing query. Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Click Add query . Duplicate an existing query. Click the options menu to the query, then choose Duplicate query . Disable a query from being run. Click the options menu to the query and choose Disable query . To run queries that you created, click Run queries . The metrics from the queries are visualized on the plot. 
If a query is invalid, the UI shows an error message. Note When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. By default, the query table shows an expanded view that lists every metric and its current value. Click the
down arrowhead to minimize the expanded view for a query. Optional: Save the page URL to use this set of queries again in the future. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions: Option Description Hide all metrics from a query. Click the options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Perform one of the following actions: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu to select the time range. Reset the time range. Click Reset zoom . Display outputs for all queries at a specific point in time. Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. Hide the plot. Click Hide graph . Additional resources Prometheus query documentation 6.5. Getting detailed information about a metrics target You can use the OpenShift Dedicated web console to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. For example, you can view the current status of targeted endpoints to see when OpenShift Dedicated monitoring is not able to scrape metrics from a targeted component. The Metrics targets page shows targets for user-defined projects. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Procedure In the Administrator perspective of the OpenShift Dedicated web console, go to Observe Targets . The Metrics targets page opens with a list of all service endpoint targets that are being scraped for metrics. This page shows details about targets for default OpenShift Dedicated and user-defined projects. This page lists the following information for each target: Service endpoint URL being scraped The ServiceMonitor resource being monitored The up or down status of the target Namespace Last scrape time Duration of the last scrape Optional: To find a specific target, perform any of the following actions: Option Description Filter the targets by status and source. Choose filters in the Filter list. The following filtering options are available: Status filters: Up . The target is currently up and being actively scraped for metrics. Down . The target is currently down and not being scraped for metrics. Source filters: Platform . Platform-level targets relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality. User . User targets relate to user-defined projects. These projects are user-created and can be customized. Search for a target by name or label. Enter a search term in the Text or Label field to the search box. Sort the targets. Click one or more of the Endpoint Status , Namespace , Last Scrape , and Scrape Duration column headers. Click the URL in the Endpoint column for a target to go to its Target details page. This page provides information about the target, including the following information: The endpoint URL being scraped for metrics The current Up or Down status of the target A link to the namespace A link to the ServiceMonitor resource details Labels attached to the target The most recent time that the target was scraped for metrics
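As a companion to the ServiceMonitor examples earlier in this chapter, the following minimal PodMonitor sketch scrapes the same prometheus-example-app pods directly, without a Service in front of them. Treat it as an illustration rather than a shipped manifest: it assumes the container port in the pod template is named web, which the sample deployment shown earlier does not set, so adjust the port name and labels to match your workload.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus-example-podmonitor
  namespace: ns1
spec:
  podMetricsEndpoints:
  - interval: 30s
    port: web                       # must match a named containerPort in the pod template
  selector:
    matchLabels:
      app: prometheus-example-app   # pod labels, not Service labels

Because a PodMonitor selects pods directly, the ClusterIP Service used in the ServiceMonitor example is not required for scraping.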
|
[
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n <namespace> get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app"
] |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/monitoring/managing-metrics
|
Chapter 11. Open Container Initiative support
|
Chapter 11. Open Container Initiative support Container registries were originally designed to support container images in the Docker image format. To promote the use of additional runtimes apart from Docker, the Open Container Initiative (OCI) was created to provide standardization surrounding container runtimes and image formats. Most container registries support the OCI standardization as it is based on the Docker image manifest V2, Schema 2 format. In addition to container images, a variety of artifacts have emerged that support not just individual applications, but also the Kubernetes platform as a whole. These range from Open Policy Agent (OPA) policies for security and governance to Helm charts and Operators that aid in application deployment. Quay.io is a private container registry that not only stores container images, but also supports an entire ecosystem of tooling to aid in the management of containers. Quay.io strives to be as compatible as possible with the OCI 1.1 Image and Distribution specifications, and supports common media types like Helm charts (as long as they are pushed with a version of Helm that supports OCI) and a variety of arbitrary media types within the manifest or layer components of container images. Support for OCI media types differs from earlier iterations of Quay.io, when the registry was more strict about accepted media types. Because Quay.io now works with a wider array of media types, including those that were previously outside the scope of its support, it is now more versatile, accommodating not only standard container image formats but also emerging or unconventional types. In addition to its expanded support for novel media types, Quay.io ensures compatibility with Docker images, including V2_2 and V2_1 formats. This compatibility with Docker V2_2 and V2_1 images demonstrates Quay.io's commitment to providing a seamless experience for Docker users. Moreover, Quay.io continues to extend its support for Docker V1 pulls, catering to users who might still rely on this earlier version of Docker images. Support for OCI artifacts is enabled by default. The following examples show you how to use some media types, which can be used as examples for using other OCI media types. 11.1. Helm and OCI prerequisites Helm simplifies how applications are packaged and deployed. Helm uses a packaging format called Charts, which contain the Kubernetes resources representing an application. Quay.io supports Helm charts so long as they are a version supported by OCI. Use the following procedures to pre-configure your system to use Helm and other OCI media types. The most recent version of Helm can be downloaded from the Helm releases page. 11.2. Using Helm charts Use the following example to download and push an etherpad chart from the Red Hat Community of Practice (CoP) repository. Prerequisites You have logged into Quay.io.
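Helm's support for OCI registries became generally available in the Helm 3.8.0 release, so it is worth confirming the client version before you begin. This check is a suggestion rather than a step from this chapter:

helm version --short

The reported version should be v3.8.0 or later for the oci:// push and pull commands in the following procedure to work without enabling experimental flags.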
Procedure Add a chart repository by entering the following command: USD helm repo add redhat-cop https://redhat-cop.github.io/helm-charts Enter the following command to update the information of available charts locally from the chart repository: USD helm repo update Enter the following command to pull a chart from a repository: USD helm pull redhat-cop/etherpad --version=0.0.4 --untar Enter the following command to package the chart into a chart archive: USD helm package ./etherpad Example output Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz Log in to Quay.io using helm registry login : USD helm registry login quay.io Push the chart to your repository using the helm push command: USD helm push etherpad-0.0.4.tgz oci://quay.io/<organization_name>/helm Example output: Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b Ensure that the push worked by deleting the local copy, and then pulling the chart from the repository: USD rm -rf etherpad-0.0.4.tgz USD helm pull oci://quay.io/<organization_name>/helm/etherpad --version 0.0.4 Example output: Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902 11.3. Cosign OCI support Cosign is a tool that can be used to sign and verify container images. It uses the ECDSA-P256 signature algorithm and Red Hat's Simple Signing payload format to create public keys that are stored in PKIX files. Private keys are stored as encrypted PEM files. Cosign currently supports the following: Hardware and KMS Signing Bring-your-own PKI OIDC PKI Built-in binary transparency and timestamping service Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later. Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a key pair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign an image with the private key by entering the following command: USD cosign sign -key cosign.key quay.io/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~/.docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay.io Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } } 11.4. Installing and using Cosign Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later.
You have set FEATURE_GENERAL_OCI_SUPPORT to true in your config.yaml file. Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a key pair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign an image with the private key by entering the following command: USD cosign sign -key cosign.key quay.io/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~/.docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay.io Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } }
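The procedures above stop at signing; verifying a signature is the natural counterpart. The following command is a sketch based on the Cosign 1.x CLI installed earlier, reusing the generated cosign.pub key and the image signed in the previous step; it is not a step taken from this guide:

cosign verify -key cosign.pub quay.io/user1/busybox:test

On success, Cosign prints the verified signature payload as JSON, confirming that the signature stored alongside the image matches the public key.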
|
[
"helm repo add redhat-cop https://redhat-cop.github.io/helm-charts",
"helm repo update",
"helm pull redhat-cop/etherpad --version=0.0.4 --untar",
"helm package ./etherpad",
"Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz",
"helm registry login quay.io",
"helm push etherpad-0.0.4.tgz oci://quay.io/<organization_name>/helm",
"Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b",
"rm -rf etherpad-0.0.4.tgz",
"helm pull oci://quay.io/<organization_name>/helm/etherpad --version 0.0.4",
"Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay.io/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay.io",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay.io/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay.io",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/oci-intro
|
Chapter 6. MachineConfigPool [machineconfiguration.openshift.io/v1]
|
Chapter 6. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigPoolSpec is the spec for MachineConfigPool resource. status object MachineConfigPoolStatus is the status for MachineConfigPool resource. 6.1.1. .spec Description MachineConfigPoolSpec is the spec for MachineConfigPool resource. Type object Property Type Description configuration object The targeted MachineConfig object for the machine config pool. machineConfigSelector object machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. maxUnavailable integer-or-string maxUnavailable defines either an integer number or percentage of nodes in the corresponding pool that can go Unavailable during an update. This includes nodes Unavailable for any reason, including user initiated cordons, failing nodes, etc. The default value is 1. A value larger than 1 will mean multiple nodes going unavailable during the update, which may affect your workload stress on the remaining nodes. You cannot set this value to 0 to stop updates (it will default back to 1); to stop updates, use the 'paused' property instead. Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards, even if maxUnavailable is greater than one. nodeSelector object nodeSelector specifies a label selector for Machines paused boolean paused specifies whether or not changes to this machine config pool should be stopped. This includes generating new desiredMachineConfig and update of machines. 6.1.2. .spec.configuration Description The targeted MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.3. .spec.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 6.1.4. .spec.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.5. .spec.machineConfigSelector Description machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.6. 
.spec.machineConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.7. .spec.machineConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.8. .spec.nodeSelector Description nodeSelector specifies a label selector for Machines Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.9. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.10. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.11. .status Description MachineConfigPoolStatus is the status for MachineConfigPool resource. Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object MachineConfigPoolCondition contains condition information for an MachineConfigPool. configuration object configuration represents the current MachineConfig object for the machine config pool. degradedMachineCount integer degradedMachineCount represents the total number of machines marked degraded (or unreconcilable). A node is marked degraded if applying a configuration failed.. machineCount integer machineCount represents the total number of machines in the machine config pool. observedGeneration integer observedGeneration represents the generation observed by the controller. readyMachineCount integer readyMachineCount represents the total number of ready machines targeted by the pool. unavailableMachineCount integer unavailableMachineCount represents the total number of unavailable (non-ready) machines targeted by the pool. A node is marked unavailable if it is in updating state or NodeReady condition is false. 
updatedMachineCount integer updatedMachineCount represents the total number of machines targeted by the pool that have the CurrentMachineConfig as their config. 6.1.12. .status.conditions Description conditions represents the latest available observations of current state. Type array 6.1.13. .status.conditions[] Description MachineConfigPoolCondition contains condition information for an MachineConfigPool. Type object Property Type Description lastTransitionTime `` lastTransitionTime is the timestamp corresponding to the last status change of this condition. message string message is a human readable description of the details of the last transition, complementing reason. reason string reason is a brief machine readable explanation for the condition's last transition. status string status of the condition, one of ('True', 'False', 'Unknown'). type string type of the condition, currently ('Done', 'Updating', 'Failed'). 6.1.14. .status.configuration Description configuration represents the current MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.15. .status.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 6.1.16. .status.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigpools DELETE : delete collection of MachineConfigPool GET : list objects of kind MachineConfigPool POST : create a MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} DELETE : delete a MachineConfigPool GET : read the specified MachineConfigPool PATCH : partially update the specified MachineConfigPool PUT : replace the specified MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status GET : read status of the specified MachineConfigPool PATCH : partially update status of the specified MachineConfigPool PUT : replace status of the specified MachineConfigPool 6.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigpools Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineConfigPool Table 6.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigPool Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPoolList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigPool Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 202 - Accepted MachineConfigPool schema 401 - Unauthorized Empty 6.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the MachineConfigPool Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineConfigPool Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigPool Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigPool Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. 
HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigPool Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty 6.2.3. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the MachineConfigPool Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineConfigPool Table 6.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigPool Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigPool Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.31. 
HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty
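The endpoints listed above can be exercised directly with curl against the cluster API server. The following is a minimal sketch rather than part of the reference: the API server URL, the token handling, and the worker pool name are assumptions that you would adapt to your own cluster.

# Assumed setup: the oc CLI is already logged in to the cluster.
APISERVER=$(oc whoami --show-server)
TOKEN=$(oc whoami -t)

# List MachineConfigPools, returning at most two items per page (GET with the limit query parameter).
curl -k -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=2"

# Read a single pool by name (GET on .../machineconfigpools/{name}).
curl -k -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker"

# Partially update a pool (PATCH) with a JSON merge patch, here pausing the worker pool.
curl -k -X PATCH -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"paused":true}}' \
  "$APISERVER/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker"

If a list response sets metadata.continue, pass that value back as the continue query parameter to fetch the next page.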
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1
|
function::task_tid
|
function::task_tid Name function::task_tid - The thread identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the thread id of the given task.
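For illustration only, the one-liner below uses task_tid together with the related task_current function to print the thread id of whichever task performs an exec; the choice of the kprocess.exec probe point and the output format are assumptions, not part of this reference.

# Print the thread id and executable name of each task that calls exec.
stap -e 'probe kprocess.exec { printf("exec by tid %d (%s)\n", task_tid(task_current()), execname()) }'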
|
[
"task_tid:long(task:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-tid
|
Chapter 11. Remotely accessing the desktop as a single user
|
Chapter 11. Remotely accessing the desktop as a single user You can remotely connect to the desktop on a RHEL server using graphical GNOME applications. Only a single user can connect to the desktop on the server at a given time. 11.1. Enabling desktop sharing on the server using GNOME This procedure configures a RHEL server to enable a remote desktop connection from a single client. Prerequisites The GNOME Remote Desktop service is installed: Procedure Configure a firewall rule to enable VNC access to the server: Reload firewall rules: Open Settings in GNOME. Navigate to the Sharing menu. Click Screen Sharing . The screen sharing configuration opens. Click the switch button in the window header to enable screen sharing. Select the Allow connections to control the screen check box. Under Access Options , select the Require a password option. Set a password in the Password field. Remote clients must enter this password when connecting to the desktop on the server. 11.2. Connecting to a shared desktop using GNOME This procedure connects to a remote desktop session using the Connections application. It connects to the graphical session of the user that is currently logged in on the server. Prerequisites A user is logged into the GNOME graphical session on the server. The desktop sharing is enabled on the server. Procedure Install the Connections application on the client: Launch the Connections application. Click the + button to open a new connection. Enter the IP address of the server. Choose the connection type based on the operating system you want to connect to. Click Connect . Verification On the client, check that you can see the shared server desktop. On the server, a screen sharing indicator appears on the right side of the top panel. You can control the screen sharing in the system menu. 11.3. Disabling encryption in GNOME VNC You can disable encryption in the GNOME remote desktop solution. This enables VNC clients that do not support encryption to connect to the server. Procedure As the server user, set the encryption key of the org.gnome.desktop.remote-desktop.vnc GSettings schema to ['none'] . Optional: Red Hat recommends that you tunnel the VNC connection over SSH to your VNC port. As a result, the SSH tunnel keeps the connection encrypted. For example: On the client, configure the port forwarding. Connect to the VNC session on the localhost:5901 address.
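As a client-side sketch of that tunnelled setup (the host names are placeholders, and the vncviewer command assumes a TigerVNC client is installed; the Connections application can equally be pointed at localhost:5901):

# Keep this running: forward local port 5901 to the VNC port on the server.
ssh -N -T -L 5901:server.example.com:5901 user@server.example.com

# In a second terminal, connect through the tunnel instead of the server address.
vncviewer localhost:5901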
|
[
"dnf install gnome-remote-desktop",
"firewall-cmd --permanent --add-service=vnc-server success",
"firewall-cmd --reload success",
"dnf install gnome-connections",
"gsettings set org.gnome.desktop.remote-desktop.vnc encryption \"['none']\"",
"ssh -N -T -L 5901: server-ip-address :5901 user@server-ip-address"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/remotely-accessing-the-desktop-as-a-single-user_getting-started-with-the-gnome-desktop-environment
|
Chapter 12. Functions development reference guide
|
Chapter 12. Functions development reference guide 12.1. Developing Go functions Important OpenShift Serverless Functions with Go is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you have created a Go function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 12.1.1. Prerequisites Before you can develop functions, you must complete the steps in Configuring OpenShift Serverless Functions . 12.1.2. Go function template structure When you create a Go function using the Knative ( kn ) CLI, the project directory looks like a typical Go project. The only exception is the additional func.yaml configuration file, which is used for specifying the image. Go functions have few restrictions. The only requirements are that your project must be defined in a function module, and must export the function Handle() . Both http and event trigger functions have the same template structure: Template structure fn βββ README.md βββ func.yaml 1 βββ go.mod 2 βββ go.sum βββ handle.go βββ handle_test.go 1 The func.yaml configuration file is used to determine the image name and registry. 2 You can add any required dependencies to the go.mod file, which can include additional local Go files. When the project is built for deployment, these dependencies are included in the resulting runtime container image. Example of adding dependencies USD go get gopkg.in/[email protected] 12.1.3. About invoking Go functions When using the Knative ( kn ) CLI to create a function project, you can generate a project that responds to CloudEvents, or one that responds to simple HTTP requests. Go functions are invoked by using different methods, depending on whether they are triggered by an HTTP request or a CloudEvent. 12.1.3.1. Functions triggered by an HTTP request When an incoming HTTP request is received, functions are invoked with a standard Go Context as the first parameter, followed by the http.ResponseWriter and http.Request parameters. You can use standard Go techniques to access the request, and set a corresponding HTTP response for your function. Example HTTP response func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) { // Read body body, err := ioutil.ReadAll(req.Body) defer req.Body.Close() if err != nil { http.Error(res, err.Error(), 500) return } // Process body and function logic // ... } 12.1.3.2. Functions triggered by a cloud event When an incoming cloud event is received, the event is invoked by the CloudEvents Go SDK . The invocation uses the Event type as a parameter. 
You can leverage the Go Context as an optional parameter in the function contract, as shown in the list of supported function signatures: Supported function signatures Handle() Handle() error Handle(context.Context) Handle(context.Context) error Handle(cloudevents.Event) Handle(cloudevents.Event) error Handle(context.Context, cloudevents.Event) Handle(context.Context, cloudevents.Event) error Handle(cloudevents.Event) *cloudevents.Event Handle(cloudevents.Event) (*cloudevents.Event, error) Handle(context.Context, cloudevents.Event) *cloudevents.Event Handle(context.Context, cloudevents.Event) (*cloudevents.Event, error) 12.1.3.2.1. CloudEvent trigger example A cloud event is received which contains a JSON string in the data property: { "customerId": "0123456", "productId": "6543210" } To access this data, a structure must be defined which maps properties in the cloud event data, and retrieves the data from the incoming event. The following example uses the Purchase structure: type Purchase struct { CustomerId string `json:"customerId"` ProductId string `json:"productId"` } func Handle(ctx context.Context, event cloudevents.Event) (err error) { purchase := &Purchase{} if err = event.DataAs(purchase); err != nil { fmt.Fprintf(os.Stderr, "failed to parse incoming CloudEvent %s\n", err) return } // ... } Alternatively, a Go encoding/json package could be used to access the cloud event directly as JSON in the form of a bytes array: func Handle(ctx context.Context, event cloudevents.Event) { bytes, err := json.Marshal(event) // ... } 12.1.4. Go function return values Functions triggered by HTTP requests can set the response directly. You can configure the function to do this by using the Go http.ResponseWriter . Example HTTP response func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) { // Set response res.Header().Add("Content-Type", "text/plain") res.Header().Add("Content-Length", "3") res.WriteHeader(200) _, err := fmt.Fprintf(res, "OK\n") if err != nil { fmt.Fprintf(os.Stderr, "error or response write: %v", err) } } Functions triggered by a cloud event might return nothing, error , or CloudEvent in order to push events into the Knative Eventing system. In this case, you must set a unique ID , proper Source , and a Type for the cloud event. The data can be populated from a defined structure, or from a map . Example CloudEvent response func Handle(ctx context.Context, event cloudevents.Event) (resp *cloudevents.Event, err error) { // ... response := cloudevents.NewEvent() response.SetID("example-uuid-32943bac6fea") response.SetSource("purchase/getter") response.SetType("purchase") // Set the data from Purchase type response.SetData(cloudevents.ApplicationJSON, Purchase{ CustomerId: custId, ProductId: prodId, }) // OR set the data directly from map response.SetData(cloudevents.ApplicationJSON, map[string]string{"customerId": custId, "productId": prodId}) // Validate the response resp = &response if err = resp.Validate(); err != nil { fmt.Printf("invalid event created. %v", err) } return } 12.1.5. Testing Go functions Go functions can be tested locally on your computer. In the default project that is created when you create a function using kn func create , there is a handle_test.go file, which contains some basic tests. These tests can be extended as needed. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . 
Procedure Navigate to the test folder for your function. Run the tests: USD go test 12.1.6. steps Build and deploy a function. 12.2. Developing Quarkus functions After you have created a Quarkus function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 12.2.1. Prerequisites Before you can develop functions, you must complete the setup steps in Configuring OpenShift Serverless Functions . 12.2.2. Quarkus function template structure When you create a Quarkus function by using the Knative ( kn ) CLI, the project directory looks similar to a typical Maven project. Additionally, the project contains the func.yaml file, which is used for configuring the function. Both http and event trigger functions have the same template structure: Template structure . βββ func.yaml 1 βββ mvnw βββ mvnw.cmd βββ pom.xml 2 βββ README.md βββ src βββ main β βββ java β β βββ functions β β βββ Function.java 3 β β βββ Input.java β β βββ Output.java β βββ resources β βββ application.properties βββ test βββ java βββ functions 4 βββ FunctionTest.java βββ NativeFunctionIT.java 1 Used to determine the image name and registry. 2 The Project Object Model (POM) file contains project configuration, such as information about dependencies. You can add additional dependencies by modifying this file. Example of additional dependencies ... <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.13</version> <scope>test</scope> </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> <version>3.8.0</version> <scope>test</scope> </dependency> </dependencies> ... Dependencies are downloaded during the first compilation. 3 The function project must contain a Java method annotated with @Funq . You can place this method in the Function.java class. 4 Contains simple test cases that can be used to test your function locally. 12.2.3. About invoking Quarkus functions You can create a Quarkus project that responds to cloud events, or one that responds to simple HTTP requests. Cloud events in Knative are transported over HTTP as a POST request, so either function type can listen and respond to incoming HTTP requests. When an incoming request is received, Quarkus functions are invoked with an instance of a permitted type. Table 12.1. Function invocation options Invocation method Data type contained in the instance Example of data HTTP POST request JSON object in the body of the request { "customerId": "0123456", "productId": "6543210" } HTTP GET request Data in the query string ?customerId=0123456&productId=6543210 CloudEvent JSON object in the data property { "customerId": "0123456", "productId": "6543210" } The following example shows a function that receives and processes the customerId and productId purchase data that is listed in the table: Example Quarkus function public class Functions { @Funq public void processPurchase(Purchase purchase) { // process the purchase } } The corresponding Purchase JavaBean class that contains the purchase data looks as follows: Example class public class Purchase { private long customerId; private long productId; // getters and setters } 12.2.3.1. 
Invocation examples The following example code defines three functions named withBeans , withCloudEvent , and withBinary ; Example import io.quarkus.funqy.Funq; import io.quarkus.funqy.knative.events.CloudEvent; public class Input { private String message; // getters and setters } public class Output { private String message; // getters and setters } public class Functions { @Funq public Output withBeans(Input in) { // function body } @Funq public CloudEvent<Output> withCloudEvent(CloudEvent<Input> in) { // function body } @Funq public void withBinary(byte[] in) { // function body } } The withBeans function of the Functions class can be invoked by: An HTTP POST request with a JSON body: USD curl "http://localhost:8080/withBeans" -X POST \ -H "Content-Type: application/json" \ -d '{"message": "Hello there."}' An HTTP GET request with query parameters: USD curl "http://localhost:8080/withBeans?message=Hello%20there." -X GET A CloudEvent object in binary encoding: USD curl "http://localhost:8080/" -X POST \ -H "Content-Type: application/json" \ -H "Ce-SpecVersion: 1.0" \ -H "Ce-Type: withBeans" \ -H "Ce-Source: cURL" \ -H "Ce-Id: 42" \ -d '{"message": "Hello there."}' A CloudEvent object in structured encoding: USD curl http://localhost:8080/ \ -H "Content-Type: application/cloudevents+json" \ -d '{ "data": {"message":"Hello there."}, "datacontenttype": "application/json", "id": "42", "source": "curl", "type": "withBeans", "specversion": "1.0"}' The withCloudEvent function of the Functions class can be invoked by using a CloudEvent object, similarly to the withBeans function. However, unlike withBeans , withCloudEvent cannot be invoked with a plain HTTP request. The withBinary function of the Functions class can be invoked by: A CloudEvent object in binary encoding: A CloudEvent object in structured encoding: 12.2.4. CloudEvent attributes If you need to read or write the attributes of a CloudEvent, such as type or subject , you can use the CloudEvent<T> generic interface and the CloudEventBuilder builder. The <T> type parameter must be one of the permitted types. In the following example, CloudEventBuilder is used to return success or failure of processing the purchase: public class Functions { private boolean _processPurchase(Purchase purchase) { // do stuff } public CloudEvent<Void> processPurchase(CloudEvent<Purchase> purchaseEvent) { System.out.println("subject is: " + purchaseEvent.subject()); if (!_processPurchase(purchaseEvent.data())) { return CloudEventBuilder.create() .type("purchase.error") .build(); } return CloudEventBuilder.create() .type("purchase.success") .build(); } } 12.2.5. Quarkus function return values Functions can return an instance of any type from the list of permitted types. Alternatively, they can return the Uni<T> type, where the <T> type parameter can be of any type from the permitted types. The Uni<T> type is useful if a function calls asynchronous APIs, because the returned object is serialized in the same format as the received object. For example: If a function receives an HTTP request, then the returned object is sent in the body of an HTTP response. If a function receives a CloudEvent object in binary encoding, then the returned object is sent in the data property of a binary-encoded CloudEvent object. 
The following example shows a function that fetches a list of purchases: Example command public class Functions { @Funq public List<Purchase> getPurchasesByName(String name) { // logic to retrieve purchases } } Invoking this function through an HTTP request produces an HTTP response that contains a list of purchases in the body of the response. Invoking this function through an incoming CloudEvent object produces a CloudEvent response with a list of purchases in the data property. 12.2.5.1. Permitted types The input and output of a function can be any of the void , String , or byte[] types. Additionally, they can be primitive types and their wrappers, for example, int and Integer . They can also be the following complex objects: Javabeans, maps, lists, arrays, and the special CloudEvents<T> type. Maps, lists, arrays, the <T> type parameter of the CloudEvents<T> type, and attributes of Javabeans can only be of types listed here. Example public class Functions { public List<Integer> getIds(); public Purchase[] getPurchasesByName(String name); public String getNameById(int id); public Map<String,Integer> getNameIdMapping(); public void processImage(byte[] img); } 12.2.6. Testing Quarkus functions Quarkus functions can be tested locally on your computer. In the default project that is created when you create a function using kn func create , there is the src/test/ directory, which contains basic Maven tests. These tests can be extended as needed. Prerequisites You have created a Quarkus function. You have installed the Knative ( kn ) CLI. Procedure Navigate to the project folder for your function. Run the Maven tests: USD ./mvnw test 12.2.7. Overriding liveness and readiness probe values You can override liveness and readiness probe values for your Quarkus functions. This allows you to configure health checks performed on the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure Override the /health/liveness and /health/readiness paths with your own values. You can do this either by changing properties in the function source or by setting the QUARKUS_SMALLRYE_HEALTH_LIVENESS_PATH and QUARKUS_SMALLRYE_HEALTH_READINESS_PATH environment variables on func.yaml file. To override the paths using the function source, update the path properties in the src/main/resources/application.properties file: 1 The root path, which is automatically prepended to the liveness and readiness paths. 2 The liveness path, set to /health/alive here. 3 The readiness path, set to /health/ready here. To override the paths using environment variables, define the path variables in the build block of the func.yaml file: build: builder: s2i buildEnvs: - name: QUARKUS_SMALLRYE_HEALTH_LIVENESS_PATH value: alive 1 - name: QUARKUS_SMALLRYE_HEALTH_READINESS_PATH value: ready 2 1 The liveness path, set to /health/alive here. 2 The readiness path, set to /health/ready here. Add the new endpoints to the func.yaml file, so that they are properly bound to the container for the Knative service: deploy: healthEndpoints: liveness: /health/alive readiness: /health/ready 12.2.8. steps Build and deploy a function. 12.3. Developing Node.js functions After you have created a Node.js function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 12.3.1. 
Prerequisites Before you can develop functions, you must complete the steps in Configuring OpenShift Serverless Functions . 12.3.2. Node.js function template structure When you create a Node.js function using the Knative ( kn ) CLI, the project directory looks like a typical Node.js project. The only exception is the additional func.yaml file, which is used to configure the function. Both http and event trigger functions have the same template structure: Template structure . βββ func.yaml 1 βββ index.js 2 βββ package.json 3 βββ README.md βββ test 4 βββ integration.js βββ unit.js 1 The func.yaml configuration file is used to determine the image name and registry. 2 Your project must contain an index.js file which exports a single function. 3 You are not restricted to the dependencies provided in the template package.json file. You can add additional dependencies as you would in any other Node.js project. Example of adding npm dependencies npm install --save opossum When the project is built for deployment, these dependencies are included in the created runtime container image. 4 Integration and unit test scripts are provided as part of the function template. 12.3.3. About invoking Node.js functions When using the Knative ( kn ) CLI to create a function project, you can generate a project that responds to CloudEvents, or one that responds to simple HTTP requests. CloudEvents in Knative are transported over HTTP as a POST request, so both function types listen for and respond to incoming HTTP events. Node.js functions can be invoked with a simple HTTP request. When an incoming request is received, functions are invoked with a context object as the first parameter. 12.3.3.1. Node.js context objects Functions are invoked by providing a context object as the first parameter. This object provides access to the incoming HTTP request information. This information includes the HTTP request method, any query strings or headers sent with the request, the HTTP version, and the request body. Incoming requests that contain a CloudEvent attach the incoming instance of the CloudEvent to the context object so that it can be accessed by using context.cloudevent . 12.3.3.1.1. Context object methods The context object has a single method, cloudEventResponse() , that accepts a data value and returns a CloudEvent. In a Knative system, if a function deployed as a service is invoked by an event broker sending a CloudEvent, the broker examines the response. If the response is a CloudEvent, this event is handled by the broker. Example context object method // Expects to receive a CloudEvent with customer data function handle(context, customer) { // process the customer const processed = handle(customer); return context.cloudEventResponse(customer) .source('/handle') .type('fn.process.customer') .response(); } 12.3.3.1.2. CloudEvent data If the incoming request is a CloudEvent, any data associated with the CloudEvent is extracted from the event and provided as a second parameter. For example, if a CloudEvent is received that contains a JSON string in its data property that is similar to the following: { "customerId": "0123456", "productId": "6543210" } When invoked, the second parameter to the function, after the context object, will be a JavaScript object that has customerId and productId properties. Example signature function handle(context, data) The data parameter in this example is a JavaScript object that contains the customerId and productId properties. 12.3.3.1.3. 
Arbitrary data A function can receive any data, not just CloudEvents . For example, you might want to call a function by using POST with an arbitrary object in the body: { "id": "12345", "contact": { "title": "Mr.", "firstname": "John", "lastname": "Smith" } } In this case, you can define the function as follows: function handle(context, customer) { return "Hello " + customer.contact.title + " " + customer.contact.lastname; } Supplying the contact object to the function would then return the following output: Hello Mr. Smith 12.3.3.1.4. Supported data types CloudEvents can contain various data types, including JSON, XML, plain text, and binary data. These data types are provided to the function in their respective formats: JSON Data : Provided as a JavaScript object. XML Data : Provided as an XML document. Plain Text : Provided as a string. Binary Data : Provided as a Buffer object. 12.3.3.1.5. Multiple data types in a function Ensure your function can handle different data types by checking the Content-Type header and parsing the data accordingly. For example: function handle(context, data) { if (context.headers['content-type'] === 'application/json') { // handle JSON data } else if (context.headers['content-type'] === 'application/xml') { // handle XML data } else { // handle other data types } } 12.3.4. Node.js function return values Functions can return any valid JavaScript type or can have no return value. When a function has no return value specified, and no failure is indicated, the caller receives a 204 No Content response. Functions can also return a CloudEvent or a Message object in order to push events into the Knative Eventing system. In this case, the developer is not required to understand or implement the CloudEvent messaging specification. Headers and other relevant information from the returned values are extracted and sent with the response. Example function handle(context, customer) { // process customer and return a new CloudEvent return new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) } 12.3.4.1. Returning primitive types Functions can return any valid JavaScript type, including primitives such as strings, numbers, and booleans: Example function returning a string function handle(context) { return "This function Works!" } Calling this function returns the following string: USD curl https://myfunction.example.com This function Works! Example function returning a number function handle(context) { let somenumber = 100 return { body: somenumber } } Calling this function returns the following number: USD curl https://myfunction.example.com 100 Example function returning a boolean function handle(context) { let someboolean = false return { body: someboolean } } Calling this function returns the following boolean: USD curl https://myfunction.example.com false Returning primitives directly without wrapping them in an object results in a 204 No Content status code with an empty body: Example function returning a primitive directly function handle(context) { let someboolean = false return someboolean } Calling this function returns the following: USD http :8080 HTTP/1.1 204 No Content Connection: keep-alive ... 12.3.4.2. Returning headers You can set a response header by adding a headers property to the return object. These headers are extracted and sent with the response to the caller. 
Example response header function handle(context, customer) { // process customer and return custom headers // the response will be '204 No content' return { headers: { customerid: customer.id } }; } 12.3.4.3. Returning status codes You can set a status code that is returned to the caller by adding a statusCode property to the return object: Example status code function handle(context, customer) { // process customer if (customer.restricted) { return { statusCode: 451 } } } Status codes can also be set for errors that are created and thrown by the function: Example error status code function handle(context, customer) { // process customer if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } } 12.3.5. Testing Node.js functions Node.js functions can be tested locally on your computer. In the default project that is created when you create a function by using kn func create , there is a test folder that contains some simple unit and integration tests. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure Navigate to the test folder for your function. Run the tests: USD npm test 12.3.6. Overriding liveness and readiness probe values You can override liveness and readiness probe values for your Node.js functions. This allows you to configure health checks performed on the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure In your function code, create the Function object, which implements the following interface: export interface Function { init?: () => any; 1 shutdown?: () => any; 2 liveness?: HealthCheck; 3 readiness?: HealthCheck; 4 logLevel?: LogLevel; handle: CloudEventFunction | HTTPFunction; 5 } 1 The initialization function, called before the server is started. This function is optional and should be synchronous. 2 The shutdown function, called after the server is stopped. This function is optional and should be synchronous. 3 The liveness function, called to check if the server is alive. This function is optional and should return 200/OK if the server is alive. 4 The readiness function, called to check if the server is ready to accept requests. This function is optional and should return 200/OK if the server is ready. 5 The function to handle HTTP requests. For example, add the following code to the index.js file: const Function = { handle: (context, body) => { // The function logic goes here return 'function called' }, liveness: () => { process.stdout.write('In liveness\n'); return 'ok, alive'; }, 1 readiness: () => { process.stdout.write('In readiness\n'); return 'ok, ready'; } 2 }; Function.liveness.path = '/alive'; 3 Function.readiness.path = '/ready'; 4 module.exports = Function; 1 Custom liveness function. 2 Custom readiness function. 3 Custom liveness endpoint. 4 Custom readiness endpoint. As an alternative to Function.liveness.path and Function.readiness.path , you can specify custom endpoints using the LIVENESS_URL and READINESS_URL environment variables: run: envs: - name: LIVENESS_URL value: /alive 1 - name: READINESS_URL value: /ready 2 1 The liveness path, set to /alive here. 2 The readiness path, set to /ready here. 
Add the new endpoints to the func.yaml file, so that they are properly bound to the container for the Knative service: deploy: healthEndpoints: liveness: /alive readiness: /ready 12.3.7. Node.js context object reference The context object has several properties that can be accessed by the function developer. Accessing these properties can provide information about HTTP requests and write output to the cluster logs. 12.3.7.1. log Provides a logging object that can be used to write output to the cluster logs. The log adheres to the Pino logging API . Example log function handle(context) { context.log.info("Processing customer"); } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"Processing customer"} You can change the log level to one of fatal , error , warn , info , debug , trace , or silent . To do that, change the value of logLevel by assigning one of these values to the environment variable FUNC_LOG_LEVEL using the config command. 12.3.7.2. query Returns the query string for the request, if any, as key-value pairs. These attributes are also found on the context object itself. Example query function handle(context) { // Log the 'name' query parameter context.log.info(context.query.name); // Query parameters are also attached to the context context.log.info(context.name); } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.com?name=tiger' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"tiger"} 12.3.7.3. body Returns the request body if any. If the request body contains JSON code, this will be parsed so that the attributes are directly available. Example body function handle(context) { // log the incoming request body's 'hello' parameter context.log.info(context.body.hello); } You can access the function by using the curl command to invoke it: Example command USD kn func invoke -d '{"Hello": "world"}' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"world"} 12.3.7.4. headers Returns the HTTP request headers as an object. Example header function handle(context) { context.log.info(context.headers["custom-header"]); } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"some-value"} 12.3.7.5. HTTP requests method Returns the HTTP request method as a string. httpVersion Returns the HTTP version as a string. httpVersionMajor Returns the HTTP major version number as a string. httpVersionMinor Returns the HTTP minor version number as a string. 12.3.8. steps Build and deploy a function. 12.4. Developing TypeScript functions After you have created a TypeScript function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 12.4.1. Prerequisites Before you can develop functions, you must complete the steps in Configuring OpenShift Serverless Functions . 12.4.2. 
TypeScript function template structure When you create a TypeScript function using the Knative ( kn ) CLI, the project directory looks like a typical TypeScript project. The only exception is the additional func.yaml file, which is used for configuring the function. Both http and event trigger functions have the same template structure: Template structure . βββ func.yaml 1 βββ package.json 2 βββ package-lock.json βββ README.md βββ src β βββ index.ts 3 βββ test 4 β βββ integration.ts β βββ unit.ts βββ tsconfig.json 1 The func.yaml configuration file is used to determine the image name and registry. 2 You are not restricted to the dependencies provided in the template package.json file. You can add additional dependencies as you would in any other TypeScript project. Example of adding npm dependencies npm install --save opossum When the project is built for deployment, these dependencies are included in the created runtime container image. 3 Your project must contain an src/index.js file which exports a function named handle . 4 Integration and unit test scripts are provided as part of the function template. 12.4.3. About invoking TypeScript functions When using the Knative ( kn ) CLI to create a function project, you can generate a project that responds to CloudEvents or one that responds to simple HTTP requests. CloudEvents in Knative are transported over HTTP as a POST request, so both function types listen for and respond to incoming HTTP events. TypeScript functions can be invoked with a simple HTTP request. When an incoming request is received, functions are invoked with a context object as the first parameter. 12.4.3.1. TypeScript context objects To invoke a function, you provide a context object as the first parameter. Accessing properties of the context object can provide information about the incoming HTTP request. Example context object function handle(context:Context): string This information includes the HTTP request method, any query strings or headers sent with the request, the HTTP version, and the request body. Incoming requests that contain a CloudEvent attach the incoming instance of the CloudEvent to the context object so that it can be accessed by using context.cloudevent . 12.4.3.1.1. Context object methods The context object has a single method, cloudEventResponse() , that accepts a data value and returns a CloudEvent. In a Knative system, if a function deployed as a service is invoked by an event broker sending a CloudEvent, the broker examines the response. If the response is a CloudEvent, this event is handled by the broker. Example context object method // Expects to receive a CloudEvent with customer data export function handle(context: Context, cloudevent?: CloudEvent): CloudEvent { // process the customer const customer = cloudevent.data; const processed = processCustomer(customer); return context.cloudEventResponse(customer) .source('/customer/process') .type('customer.processed') .response(); } 12.4.3.1.2. Context types The TypeScript type definition files export the following types for use in your functions. 
Exported type definitions // Invokable is the expeted Function signature for user functions export interface Invokable { (context: Context, cloudevent?: CloudEvent): any } // Logger can be used for structural logging to the console export interface Logger { debug: (msg: any) => void, info: (msg: any) => void, warn: (msg: any) => void, error: (msg: any) => void, fatal: (msg: any) => void, trace: (msg: any) => void, } // Context represents the function invocation context, and provides // access to the event itself as well as raw HTTP objects. export interface Context { log: Logger; req: IncomingMessage; query?: Record<string, any>; body?: Record<string, any>|string; method: string; headers: IncomingHttpHeaders; httpVersion: string; httpVersionMajor: number; httpVersionMinor: number; cloudevent: CloudEvent; cloudEventResponse(data: string|object): CloudEventResponse; } // CloudEventResponse is a convenience class used to create // CloudEvents on function returns export interface CloudEventResponse { id(id: string): CloudEventResponse; source(source: string): CloudEventResponse; type(type: string): CloudEventResponse; version(version: string): CloudEventResponse; response(): CloudEvent; } 12.4.3.1.3. CloudEvent data If the incoming request is a CloudEvent, any data associated with the CloudEvent is extracted from the event and provided as a second parameter. For example, if a CloudEvent is received that contains a JSON string in its data property that is similar to the following: { "customerId": "0123456", "productId": "6543210" } When invoked, the second parameter to the function, after the context object, will be a JavaScript object that has customerId and productId properties. Example signature function handle(context: Context, cloudevent?: CloudEvent): CloudEvent The cloudevent parameter in this example is a JavaScript object that contains the customerId and productId properties. 12.4.4. TypeScript function return values Functions can return any valid JavaScript type or can have no return value. When a function has no return value specified, and no failure is indicated, the caller receives a 204 No Content response. Functions can also return a CloudEvent or a Message object in order to push events into the Knative Eventing system. In this case, the developer is not required to understand or implement the CloudEvent messaging specification. Headers and other relevant information from the returned values are extracted and sent with the response. Example export const handle: Invokable = function ( context: Context, cloudevent?: CloudEvent ): Message { // process customer and return a new CloudEvent const customer = cloudevent.data; return HTTP.binary( new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) ); }; 12.4.4.1. Returning headers You can set a response header by adding a headers property to the return object. These headers are extracted and sent with the response to the caller. Example response header export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer and return custom headers const customer = cloudevent.data as Record<string, any>; return { headers: { 'customer-id': customer.id } }; } 12.4.4.2. 
Returning status codes You can set a status code that is returned to the caller by adding a statusCode property to the return object: Example status code export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { return { statusCode: 451 } } // business logic, then return { statusCode: 240 } } Status codes can also be set for errors that are created and thrown by the function: Example error status code export function handle(context: Context, cloudevent?: CloudEvent): Record<string, string> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } } 12.4.5. Testing TypeScript functions TypeScript functions can be tested locally on your computer. In the default project that is created when you create a function using kn func create , there is a test folder that contains some simple unit and integration tests. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure If you have not previously run tests, install the dependencies first: USD npm install Navigate to the test folder for your function. Run the tests: USD npm test 12.4.6. Overriding liveness and readiness probe values You can override liveness and readiness probe values for your TypeScript functions. This allows you to configure health checks performed on the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure In your function code, create the Function object, which implements the following interface: export interface Function { init?: () => any; 1 shutdown?: () => any; 2 liveness?: HealthCheck; 3 readiness?: HealthCheck; 4 logLevel?: LogLevel; handle: CloudEventFunction | HTTPFunction; 5 } 1 The initialization function, called before the server is started. This function is optional and should be synchronous. 2 The shutdown function, called after the server is stopped. This function is optional and should be synchronous. 3 The liveness function, called to check if the server is alive. This function is optional and should return 200/OK if the server is alive. 4 The readiness function, called to check if the server is ready to accept requests. This function is optional and should return 200/OK if the server is ready. 5 The function to handle HTTP requests. For example, add the following code to the index.js file: const Function = { handle: (context, body) => { // The function logic goes here return 'function called' }, liveness: () => { process.stdout.write('In liveness\n'); return 'ok, alive'; }, 1 readiness: () => { process.stdout.write('In readiness\n'); return 'ok, ready'; } 2 }; Function.liveness.path = '/alive'; 3 Function.readiness.path = '/ready'; 4 module.exports = Function; 1 Custom liveness function. 2 Custom readiness function. 3 Custom liveness endpoint. 4 Custom readiness endpoint. 
As an alternative to Function.liveness.path and Function.readiness.path , you can specify custom endpoints using the LIVENESS_URL and READINESS_URL environment variables: run: envs: - name: LIVENESS_URL value: /alive 1 - name: READINESS_URL value: /ready 2 1 The liveness path, set to /alive here. 2 The readiness path, set to /ready here. Add the new endpoints to the func.yaml file, so that they are properly bound to the container for the Knative service: deploy: healthEndpoints: liveness: /alive readiness: /ready 12.4.7. TypeScript context object reference The context object has several properties that can be accessed by the function developer. Accessing these properties can provide information about incoming HTTP requests and write output to the cluster logs. 12.4.7.1. log Provides a logging object that can be used to write output to the cluster logs. The log adheres to the Pino logging API . Example log export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"Processing customer"} You can change the log level to one of fatal , error , warn , info , debug , trace , or silent . To do that, change the value of logLevel by assigning one of these values to the environment variable FUNC_LOG_LEVEL using the config command. 12.4.7.2. query Returns the query string for the request, if any, as key-value pairs. These attributes are also found on the context object itself. Example query export function handle(context: Context): string { // log the 'name' query parameter if (context.query) { context.log.info((context.query as Record<string, string>).name); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' --data '{"name": "tiger"}' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"tiger"} {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"tiger"} 12.4.7.3. body Returns the request body, if any. If the request body contains JSON code, this will be parsed so that the attributes are directly available. Example body export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' --data '{"hello": "world"}' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"world"} 12.4.7.4. headers Returns the HTTP request headers as an object. 
Example header export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.headers as Record<string, string>)['custom-header']); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the curl command to invoke it: Example command USD curl -H'x-custom-header: some-value'' http://example.function.com Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"some-value"} 12.4.7.5. HTTP requests method Returns the HTTP request method as a string. httpVersion Returns the HTTP version as a string. httpVersionMajor Returns the HTTP major version number as a string. httpVersionMinor Returns the HTTP minor version number as a string. 12.4.8. steps Build and deploy a function. See the Pino API documentation for more information about logging with functions. 12.5. Developing Python functions Important OpenShift Serverless Functions with Python is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you have created a Python function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 12.5.1. Prerequisites Before you can develop functions, you must complete the steps in Configuring OpenShift Serverless Functions . 12.5.2. Python function template structure When you create a Python function by using the Knative ( kn ) CLI, the project directory looks similar to a typical Python project. Python functions have very few restrictions. The only requirements are that your project contains a func.py file that contains a main() function, and a func.yaml configuration file. Developers are not restricted to the dependencies provided in the template requirements.txt file. Additional dependencies can be added as they would be in any other Python project. When the project is built for deployment, these dependencies will be included in the created runtime container image. Both http and event trigger functions have the same template structure: Template structure fn βββ func.py 1 βββ func.yaml 2 βββ requirements.txt 3 βββ test_func.py 4 1 Contains a main() function. 2 Used to determine the image name and registry. 3 Additional dependencies can be added to the requirements.txt file as they are in any other Python project. 4 Contains a simple unit test that can be used to test your function locally. 12.5.3. About invoking Python functions Python functions can be invoked with a simple HTTP request. When an incoming request is received, functions are invoked with a context object as the first parameter. The context object is a Python class with two attributes: The request attribute is always present, and contains the Flask request object. The second attribute, cloud_event , is populated if the incoming request is a CloudEvent object. Developers can access any CloudEvent data from the context object. 
Example context object def main(context: Context): """ The context parameter contains the Flask request object and any CloudEvent received with the request. """ print(f"Method: {context.request.method}") print(f"Event data {context.cloud_event.data}") # ... business logic here 12.5.4. Python function return values Functions can return any value supported by Flask . This is because the invocation framework proxies these values directly to the Flask server. Example def main(context: Context): body = { "message": "Howdy!" } headers = { "content-type": "application/json" } return body, 200, headers Functions can set both headers and response codes as secondary and tertiary response values from function invocation. 12.5.4.1. Returning CloudEvents Developers can use the @event decorator to tell the invoker that the function return value must be converted to a CloudEvent before sending the response. Example @event("event_source"="/my/function", "event_type"="my.type") def main(context): # business logic here data = do_something() # more data processing return data This example sends a CloudEvent as the response value, with a type of "my.type" and a source of "/my/function" . The CloudEvent data property is set to the returned data variable. The event_source and event_type decorator attributes are both optional. 12.5.5. Testing Python functions You can test Python functions locally on your computer. The default project contains a test_func.py file, which provides a simple unit test for functions. Note The default test framework for Python functions is unittest . You can use a different test framework if you prefer. Prerequisites To run Python functions tests locally, you must install the required dependencies: USD pip install -r requirements.txt Procedure Navigate to the folder for your function that contains the test_func.py file. Run the tests: USD python3 test_func.py 12.5.6. steps Build and deploy a function.
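Returning briefly to the unit testing described in Section 12.5.5, a minimal test_func.py-style sketch is shown below. The stub class names and the assumed (body, status, headers) return shape are illustrative assumptions, not the exact code generated by kn func create; adapt them to whatever your main() actually returns.

import unittest

import func  # the generated func.py that defines main()


class DummyRequest:
    """Minimal stand-in for the Flask request object (illustrative assumption)."""
    method = "GET"


class DummyContext:
    """Minimal stand-in for the invocation context passed to main()."""
    request = DummyRequest()
    cloud_event = None


class TestFunc(unittest.TestCase):
    def test_main_returns_json_body(self):
        # Assumes main() returns (body, status_code, headers) as in the earlier example
        body, status, headers = func.main(DummyContext())
        self.assertEqual(status, 200)
        self.assertEqual(headers.get("content-type"), "application/json")
        self.assertIn("message", body)


if __name__ == "__main__":
    unittest.main()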
|
[
"fn βββ README.md βββ func.yaml 1 βββ go.mod 2 βββ go.sum βββ handle.go βββ handle_test.go",
"go get gopkg.in/[email protected]",
"func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) { // Read body body, err := ioutil.ReadAll(req.Body) defer req.Body.Close() if err != nil { http.Error(res, err.Error(), 500) return } // Process body and function logic // }",
"Handle() Handle() error Handle(context.Context) Handle(context.Context) error Handle(cloudevents.Event) Handle(cloudevents.Event) error Handle(context.Context, cloudevents.Event) Handle(context.Context, cloudevents.Event) error Handle(cloudevents.Event) *cloudevents.Event Handle(cloudevents.Event) (*cloudevents.Event, error) Handle(context.Context, cloudevents.Event) *cloudevents.Event Handle(context.Context, cloudevents.Event) (*cloudevents.Event, error)",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"type Purchase struct { CustomerId string `json:\"customerId\"` ProductId string `json:\"productId\"` } func Handle(ctx context.Context, event cloudevents.Event) (err error) { purchase := &Purchase{} if err = event.DataAs(purchase); err != nil { fmt.Fprintf(os.Stderr, \"failed to parse incoming CloudEvent %s\\n\", err) return } // }",
"func Handle(ctx context.Context, event cloudevents.Event) { bytes, err := json.Marshal(event) // }",
"func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) { // Set response res.Header().Add(\"Content-Type\", \"text/plain\") res.Header().Add(\"Content-Length\", \"3\") res.WriteHeader(200) _, err := fmt.Fprintf(res, \"OK\\n\") if err != nil { fmt.Fprintf(os.Stderr, \"error or response write: %v\", err) } }",
"func Handle(ctx context.Context, event cloudevents.Event) (resp *cloudevents.Event, err error) { // response := cloudevents.NewEvent() response.SetID(\"example-uuid-32943bac6fea\") response.SetSource(\"purchase/getter\") response.SetType(\"purchase\") // Set the data from Purchase type response.SetData(cloudevents.ApplicationJSON, Purchase{ CustomerId: custId, ProductId: prodId, }) // OR set the data directly from map response.SetData(cloudevents.ApplicationJSON, map[string]string{\"customerId\": custId, \"productId\": prodId}) // Validate the response resp = &response if err = resp.Validate(); err != nil { fmt.Printf(\"invalid event created. %v\", err) } return }",
"go test",
". βββ func.yaml 1 βββ mvnw βββ mvnw.cmd βββ pom.xml 2 βββ README.md βββ src βββ main β βββ java β β βββ functions β β βββ Function.java 3 β β βββ Input.java β β βββ Output.java β βββ resources β βββ application.properties βββ test βββ java βββ functions 4 βββ FunctionTest.java βββ NativeFunctionIT.java",
"<dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.13</version> <scope>test</scope> </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> <version>3.8.0</version> <scope>test</scope> </dependency> </dependencies>",
"public class Functions { @Funq public void processPurchase(Purchase purchase) { // process the purchase } }",
"public class Purchase { private long customerId; private long productId; // getters and setters }",
"import io.quarkus.funqy.Funq; import io.quarkus.funqy.knative.events.CloudEvent; public class Input { private String message; // getters and setters } public class Output { private String message; // getters and setters } public class Functions { @Funq public Output withBeans(Input in) { // function body } @Funq public CloudEvent<Output> withCloudEvent(CloudEvent<Input> in) { // function body } @Funq public void withBinary(byte[] in) { // function body } }",
"curl \"http://localhost:8080/withBeans\" -X POST -H \"Content-Type: application/json\" -d '{\"message\": \"Hello there.\"}'",
"curl \"http://localhost:8080/withBeans?message=Hello%20there.\" -X GET",
"curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/json\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBeans\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" -d '{\"message\": \"Hello there.\"}'",
"curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d '{ \"data\": {\"message\":\"Hello there.\"}, \"datacontenttype\": \"application/json\", \"id\": \"42\", \"source\": \"curl\", \"type\": \"withBeans\", \"specversion\": \"1.0\"}'",
"curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/octet-stream\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBinary\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" --data-binary '@img.jpg'",
"curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d \"{ \\\"data_base64\\\": \\\"USD(base64 --wrap=0 img.jpg)\\\", \\\"datacontenttype\\\": \\\"application/octet-stream\\\", \\\"id\\\": \\\"42\\\", \\\"source\\\": \\\"curl\\\", \\\"type\\\": \\\"withBinary\\\", \\\"specversion\\\": \\\"1.0\\\"}\"",
"public class Functions { private boolean _processPurchase(Purchase purchase) { // do stuff } public CloudEvent<Void> processPurchase(CloudEvent<Purchase> purchaseEvent) { System.out.println(\"subject is: \" + purchaseEvent.subject()); if (!_processPurchase(purchaseEvent.data())) { return CloudEventBuilder.create() .type(\"purchase.error\") .build(); } return CloudEventBuilder.create() .type(\"purchase.success\") .build(); } }",
"public class Functions { @Funq public List<Purchase> getPurchasesByName(String name) { // logic to retrieve purchases } }",
"public class Functions { public List<Integer> getIds(); public Purchase[] getPurchasesByName(String name); public String getNameById(int id); public Map<String,Integer> getNameIdMapping(); public void processImage(byte[] img); }",
"./mvnw test",
"quarkus.smallrye-health.root-path=/health 1 quarkus.smallrye-health.liveness-path=alive 2 quarkus.smallrye-health.readiness-path=ready 3",
"build: builder: s2i buildEnvs: - name: QUARKUS_SMALLRYE_HEALTH_LIVENESS_PATH value: alive 1 - name: QUARKUS_SMALLRYE_HEALTH_READINESS_PATH value: ready 2",
"deploy: healthEndpoints: liveness: /health/alive readiness: /health/ready",
". βββ func.yaml 1 βββ index.js 2 βββ package.json 3 βββ README.md βββ test 4 βββ integration.js βββ unit.js",
"npm install --save opossum",
"// Expects to receive a CloudEvent with customer data function handle(context, customer) { // process the customer const processed = handle(customer); return context.cloudEventResponse(customer) .source('/handle') .type('fn.process.customer') .response(); }",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"function handle(context, data)",
"{ \"id\": \"12345\", \"contact\": { \"title\": \"Mr.\", \"firstname\": \"John\", \"lastname\": \"Smith\" } }",
"function handle(context, customer) { return \"Hello \" + customer.contact.title + \" \" + customer.contact.lastname; }",
"Hello Mr. Smith",
"function handle(context, data) { if (context.headers['content-type'] === 'application/json') { // handle JSON data } else if (context.headers['content-type'] === 'application/xml') { // handle XML data } else { // handle other data types } }",
"function handle(context, customer) { // process customer and return a new CloudEvent return new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) }",
"function handle(context) { return \"This function Works!\" }",
"curl https://myfunction.example.com",
"This function Works!",
"function handle(context) { let somenumber = 100 return { body: somenumber } }",
"curl https://myfunction.example.com",
"100",
"function handle(context) { let someboolean = false return { body: someboolean } }",
"curl https://myfunction.example.com",
"false",
"function handle(context) { let someboolean = false return someboolean }",
"http :8080",
"HTTP/1.1 204 No Content Connection: keep-alive",
"function handle(context, customer) { // process customer and return custom headers // the response will be '204 No content' return { headers: { customerid: customer.id } }; }",
"function handle(context, customer) { // process customer if (customer.restricted) { return { statusCode: 451 } } }",
"function handle(context, customer) { // process customer if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }",
"npm test",
"export interface Function { init?: () => any; 1 shutdown?: () => any; 2 liveness?: HealthCheck; 3 readiness?: HealthCheck; 4 logLevel?: LogLevel; handle: CloudEventFunction | HTTPFunction; 5 }",
"const Function = { handle: (context, body) => { // The function logic goes here return 'function called' }, liveness: () => { process.stdout.write('In liveness\\n'); return 'ok, alive'; }, 1 readiness: () => { process.stdout.write('In readiness\\n'); return 'ok, ready'; } 2 }; Function.liveness.path = '/alive'; 3 Function.readiness.path = '/ready'; 4 module.exports = Function;",
"run: envs: - name: LIVENESS_URL value: /alive 1 - name: READINESS_URL value: /ready 2",
"deploy: healthEndpoints: liveness: /alive readiness: /ready",
"function handle(context) { context.log.info(\"Processing customer\"); }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}",
"function handle(context) { // Log the 'name' query parameter context.log.info(context.query.name); // Query parameters are also attached to the context context.log.info(context.name); }",
"kn func invoke --target 'http://example.com?name=tiger'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}",
"function handle(context) { // log the incoming request body's 'hello' parameter context.log.info(context.body.hello); }",
"kn func invoke -d '{\"Hello\": \"world\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}",
"function handle(context) { context.log.info(context.headers[\"custom-header\"]); }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}",
". βββ func.yaml 1 βββ package.json 2 βββ package-lock.json βββ README.md βββ src β βββ index.ts 3 βββ test 4 β βββ integration.ts β βββ unit.ts βββ tsconfig.json",
"npm install --save opossum",
"function handle(context:Context): string",
"// Expects to receive a CloudEvent with customer data export function handle(context: Context, cloudevent?: CloudEvent): CloudEvent { // process the customer const customer = cloudevent.data; const processed = processCustomer(customer); return context.cloudEventResponse(customer) .source('/customer/process') .type('customer.processed') .response(); }",
"// Invokable is the expeted Function signature for user functions export interface Invokable { (context: Context, cloudevent?: CloudEvent): any } // Logger can be used for structural logging to the console export interface Logger { debug: (msg: any) => void, info: (msg: any) => void, warn: (msg: any) => void, error: (msg: any) => void, fatal: (msg: any) => void, trace: (msg: any) => void, } // Context represents the function invocation context, and provides // access to the event itself as well as raw HTTP objects. export interface Context { log: Logger; req: IncomingMessage; query?: Record<string, any>; body?: Record<string, any>|string; method: string; headers: IncomingHttpHeaders; httpVersion: string; httpVersionMajor: number; httpVersionMinor: number; cloudevent: CloudEvent; cloudEventResponse(data: string|object): CloudEventResponse; } // CloudEventResponse is a convenience class used to create // CloudEvents on function returns export interface CloudEventResponse { id(id: string): CloudEventResponse; source(source: string): CloudEventResponse; type(type: string): CloudEventResponse; version(version: string): CloudEventResponse; response(): CloudEvent; }",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"function handle(context: Context, cloudevent?: CloudEvent): CloudEvent",
"export const handle: Invokable = function ( context: Context, cloudevent?: CloudEvent ): Message { // process customer and return a new CloudEvent const customer = cloudevent.data; return HTTP.binary( new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) ); };",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer and return custom headers const customer = cloudevent.data as Record<string, any>; return { headers: { 'customer-id': customer.id } }; }",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { return { statusCode: 451 } } // business logic, then return { statusCode: 240 } }",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, string> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }",
"npm install",
"npm test",
"export interface Function { init?: () => any; 1 shutdown?: () => any; 2 liveness?: HealthCheck; 3 readiness?: HealthCheck; 4 logLevel?: LogLevel; handle: CloudEventFunction | HTTPFunction; 5 }",
"const Function = { handle: (context, body) => { // The function logic goes here return 'function called' }, liveness: () => { process.stdout.write('In liveness\\n'); return 'ok, alive'; }, 1 readiness: () => { process.stdout.write('In readiness\\n'); return 'ok, ready'; } 2 }; Function.liveness.path = '/alive'; 3 Function.readiness.path = '/ready'; 4 module.exports = Function;",
"run: envs: - name: LIVENESS_URL value: /alive 1 - name: READINESS_URL value: /ready 2",
"deploy: healthEndpoints: liveness: /alive readiness: /ready",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}",
"export function handle(context: Context): string { // log the 'name' query parameter if (context.query) { context.log.info((context.query as Record<string, string>).name); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com' --data '{\"name\": \"tiger\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"} {\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com' --data '{\"hello\": \"world\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.headers as Record<string, string>)['custom-header']); } else { context.log.info('No data received'); } return 'OK'; }",
"curl -H'x-custom-header: some-value'' http://example.function.com",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}",
"fn βββ func.py 1 βββ func.yaml 2 βββ requirements.txt 3 βββ test_func.py 4",
"def main(context: Context): \"\"\" The context parameter contains the Flask request object and any CloudEvent received with the request. \"\"\" print(f\"Method: {context.request.method}\") print(f\"Event data {context.cloud_event.data}\") # ... business logic here",
"def main(context: Context): body = { \"message\": \"Howdy!\" } headers = { \"content-type\": \"application/json\" } return body, 200, headers",
"@event(\"event_source\"=\"/my/function\", \"event_type\"=\"my.type\") def main(context): # business logic here data = do_something() # more data processing return data",
"pip install -r requirements.txt",
"python3 test_func.py"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/functions/functions-development-reference-guide
|
Chapter 1. RBAC APIs
|
Chapter 1. RBAC APIs 1.1. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and it adds who information via Subjects. Type object 1.2. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster-level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 1.3. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a Role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information through the namespace in which it exists. RoleBindings in a given namespace only have effect in that namespace. Type object 1.4. Role [rbac.authorization.k8s.io/v1] Description Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Type object
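For orientation, a minimal Role and RoleBinding pair might look like the following sketch. The namespace, user, and resource names are illustrative assumptions, not values taken from this API reference.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # grants read-only access to pods
  namespace: my-project       # Roles are namespaced
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-project
subjects:
- kind: User                  # the "who" information
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:                      # the referenced (not contained) Role
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io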
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/rbac_apis/rbac-apis
|
Chapter 3. Alternative provisioning network methods
|
Chapter 3. Alternative provisioning network methods This section contains information about other methods that you can use to configure the provisioning network to accommodate routed spine-leaf with composable networks. 3.1. VLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VLAN tunnel across the L3 topology. For more information, see Figure 3.1, "VLAN provisioning network topology" . If you use a VLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, trunk a VLAN between the Top-of-Rack (ToR) leaf switches. In the following diagram, the StorageLeaf networks are presented to the Ceph storage and Compute nodes; the NetworkLeaf represents an example of any network that you want to compose. Figure 3.1. VLAN provisioning network topology 3.2. VXLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VXLAN tunnel to span across the layer 3 topology. For more information, see Figure 3.2, "VXLAN provisioning network topology" . If you use a VXLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, configure VXLAN endpoints on the Top-of-Rack (ToR) leaf switches. Figure 3.2. VXLAN provisioning network topology
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_spine-leaf_networking/assembly_alternative-provisioning-network-methods
|
Chapter 15. Performing basic overcloud administration tasks
|
Chapter 15. Performing basic overcloud administration tasks This chapter contains information about basic tasks you might need to perform during the lifecycle of your overcloud. 15.1. Accessing overcloud nodes through SSH You can access each overcloud node through the SSH protocol. Each overcloud node contains a tripleo-admin user. The stack user on the undercloud has key-based SSH access to the tripleo-admin user on each overcloud node. All overcloud nodes have a short hostname that the undercloud resolves to an IP address on the control plane network. Each short hostname uses a .ctlplane suffix. For example, the short name for overcloud-controller-0 is overcloud-controller-0.ctlplane . Prerequisites A deployed overcloud with a working control plane network. Procedure Log in to the undercloud as the stack user. Find the name of the node that you want to access: Connect to the node as the tripleo-admin user: 15.2. Managing containerized services Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services. Listing containers and images To list running containers, run the following command: To include stopped or failed containers in the command output, add the --all option to the command: To list container images, run the following command: Inspecting container properties To view the properties of a container or container images, use the podman inspect command. For example, to inspect the keystone container, run the following command: Managing containers with Systemd services Previous versions of OpenStack Platform managed containers with Docker and its daemon. Now, the Systemd services interface manages the lifecycle of the containers. Each container is a service, and you run Systemd commands to perform specific operations for each container. Note It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead. To check a container status, run the systemctl status command: To stop a container, run the systemctl stop command: To start a container, run the systemctl start command: To restart a container, run the systemctl restart command: Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations: Clean exit code or signal, such as running the podman stop command. Unclean exit code, such as the podman container crashing after a start. Unclean signals. Timeout if the container takes more than 1m 30s to start. For more information about Systemd services, see the systemd.service documentation . Note Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/ . For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart.
Monitoring podman containers with Systemd timers The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts. To list all OpenStack Platform container timers, run the systemctl list-timers command and limit the output to lines containing tripleo : To check the status of a specific container timer, run the systemctl status command for the healthcheck service: To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource. For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command: If the healthcheck service is disabled but the timer for that service is present and enabled, it means that the check is currently timed out, but will be run according to the timer. You can also start the check manually. Note The podman ps command does not show the container health status. Checking container logs Red Hat OpenStack Platform 17.0 logs all standard output (stdout) and standard error (stderr) from all containers, consolidated in one file for each container in /var/log/containers/stdout . The host also applies log rotation to this directory, which prevents huge files and disk space issues. If a container is replaced, the new container outputs to the same log file, because podman uses the container name instead of the container ID. You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command: Accessing containers To enter the shell for a containerized service, use the podman exec command to launch /bin/bash . For example, to enter the shell for the keystone container, run the following command: To enter the shell for the keystone container as the root user, run the following command: To exit the container, run the following command: 15.3. Modifying the overcloud environment You can modify the overcloud to add additional features or alter existing operations. Procedure To modify the overcloud, make modifications to your custom environment files and heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 11.3, "Configuring and deploying the overcloud" , rerun the following command: Director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. Director does not recreate the overcloud, but rather changes the existing overcloud. Important Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually. If you want to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example: This command includes the new parameters and resources from the environment file into the stack. Important It is not advisable to make manual modifications to the overcloud configuration because director might overwrite these modifications later. 15.4. Importing virtual machines into the overcloud You can migrate virtual machines from an existing OpenStack environment to your Red Hat OpenStack Platform (RHOSP) environment.
Procedure On the existing OpenStack environment, create a new image by taking a snapshot of a running server and download the image: Replace <instance_name> with the name of the instance. Replace <image_name> with the name of the new image. Replace <exported_vm.qcow2> with the name of the exported virtual machine. Copy the exported image to the undercloud node: Log in to the undercloud as the stack user. Source the overcloudrc credentials file: Upload the exported image into the overcloud: Launch a new instance: Important You can use these commands to copy each virtual machine disk from the existing OpenStack environment to the new Red Hat OpenStack Platform. QCOW snapshots lose their original layering system. 15.5. Launching the ephemeral heat process In previous versions of Red Hat OpenStack Platform (RHOSP), a system-installed Heat process was used to install the overcloud. Now, ephemeral Heat is used to install the overcloud, meaning that the heat-api and heat-engine processes are started on demand by the deployment, update, and upgrade commands. Previously, you used the openstack stack command to create and manage stacks. This command is no longer available by default. For troubleshooting and debugging purposes, for example if the stack fails, you must first launch the ephemeral Heat process to use the openstack stack commands. Use the openstack tripleo launch heat command to enable ephemeral Heat outside of a deployment. Procedure Use the openstack tripleo launch heat command to launch the ephemeral Heat process: The command exits after launching the Heat process; the Heat process continues to run in the background as a podman pod. Use the podman pod ps command to verify that the ephemeral-heat process is running: Use the export command to export the OS_CLOUD environment: Use the openstack stack list command to list the installed stacks: You can debug with commands such as openstack stack environment show and openstack stack resource list. After you have finished debugging, stop the ephemeral Heat process: Note Sometimes, exporting the heat environment fails. This can happen when other credentials, such as overcloudrc , are in use. In this case, unset the existing environment and source the heat environment. 15.6. Running the dynamic inventory script You can run Ansible-based automation in your Red Hat OpenStack Platform (RHOSP) environment. Use the tripleo-ansible-inventory.yaml inventory file located in the /home/stack/overcloud-deploy/<stack> directory to run ansible plays or ad-hoc commands. Note If you want to run an Ansible playbook or an Ansible ad-hoc command on the undercloud, you must use the /home/stack/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml inventory file. Procedure To view your inventory of nodes, run the following Ansible ad-hoc command: To execute Ansible playbooks on your environment, run the ansible command and include the full path to the inventory file using the -i option. For example: Replace <hosts> with the type of hosts that you want to use: controller for all Controller nodes compute for all Compute nodes overcloud for all overcloud child nodes. For example, controller and compute nodes "*" for all nodes Replace <options> with additional Ansible options. Use the --ssh-extra-args='-o StrictHostKeyChecking=no' option to bypass confirmation on host key checking. Use the -u [USER] option to change the SSH user that executes the Ansible automation.
The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter. Use the -m [MODULE] option to use a specific Ansible module. The default is command , which executes Linux commands. Use the -a [MODULE_ARGS] option to define arguments for the chosen module. Important Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes. 15.7. Removing an overcloud stack You can delete an overcloud stack and unprovision all the stack nodes. Note Deleting your overcloud stack does not erase all the overcloud data. If you need to erase all the overcloud data, contact Red Hat support. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Retrieve a list of all the nodes in your stack and their current status: Delete the overcloud stack and unprovision the nodes and networks: Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml . Replace <networks_definition_file> with the name of your networks definition file, for example, network_data_v2.yaml . Replace <stack> with the name of the stack that you want to delete. If not specified, the default stack is overcloud . Confirm that you want to delete the overcloud: Wait for the overcloud to delete and the nodes and networks to unprovision. Confirm that the bare-metal nodes have been unprovisioned: Remove the stack directories: Note The directory paths for your stack might be different from the default if you used the --output-dir and --working-dir options when deploying the overcloud with the openstack overcloud deploy command.
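Putting the options from Section 15.6 together, an ad-hoc run against the Compute nodes might look like the following sketch. The module and its argument (command / uptime) and the explicit tripleo-admin user are illustrative assumptions; substitute whatever check and user you need.

(undercloud)$ ansible compute -i ~/overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml -u tripleo-admin --ssh-extra-args='-o StrictHostKeyChecking=no' -m command -a "uptime"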
|
[
"(undercloud)USD metalsmith list",
"(undercloud)USD ssh tripleo-admin@overcloud-controller-0",
"sudo podman ps",
"sudo podman ps --all",
"sudo podman images",
"sudo podman inspect keystone",
"sudo systemctl status tripleo_keystone β tripleo_keystone.service - keystone container Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago Main PID: 29012 (podman) CGroup: /system.slice/tripleo_keystone.service ββ29012 /usr/bin/podman start -a keystone",
"sudo systemctl stop tripleo_keystone",
"sudo systemctl start tripleo_keystone",
"sudo systemctl restart tripleo_keystone",
"sudo systemctl list-timers | grep tripleo Mon 2019-02-18 20:18:30 UTC 1s left Mon 2019-02-18 20:17:26 UTC 1min 2s ago tripleo_nova_metadata_healthcheck.timer tripleo_nova_metadata_healthcheck.service Mon 2019-02-18 20:18:34 UTC 5s left Mon 2019-02-18 20:17:23 UTC 1min 5s ago tripleo_keystone_healthcheck.timer tripleo_keystone_healthcheck.service Mon 2019-02-18 20:18:35 UTC 6s left Mon 2019-02-18 20:17:13 UTC 1min 15s ago tripleo_memcached_healthcheck.timer tripleo_memcached_healthcheck.service (...)",
"sudo systemctl status tripleo_keystone_healthcheck.service β tripleo_keystone_healthcheck.service - keystone healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.service; disabled; vendor preset: disabled) Active: inactive (dead) since Mon 2019-02-18 20:22:46 UTC; 22s ago Process: 115581 ExecStart=/usr/bin/podman exec keystone /openstack/healthcheck (code=exited, status=0/SUCCESS) Main PID: 115581 (code=exited, status=0/SUCCESS) Feb 18 20:22:46 undercloud.localdomain systemd[1]: Starting keystone healthcheck Feb 18 20:22:46 undercloud.localdomain podman[115581]: {\"versions\": {\"values\": [{\"status\": \"stable\", \"updated\": \"2019-01-22T00:00:00Z\", \"...\"}]}]}} Feb 18 20:22:46 undercloud.localdomain podman[115581]: 300 192.168.24.1:35357 0.012 seconds Feb 18 20:22:46 undercloud.localdomain systemd[1]: Started keystone healthcheck.",
"sudo systemctl status tripleo_keystone_healthcheck.timer β tripleo_keystone_healthcheck.timer - keystone container healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled) Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago",
"sudo podman logs keystone",
"sudo podman exec -it keystone /bin/bash",
"sudo podman exec --user 0 -it <NAME OR ID> /bin/bash",
"exit",
"source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/overcloud-baremetal-deployed.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --ntp-server pool.ntp.org",
"source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/new-environment.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml -e ~/templates/overcloud-baremetal-deployed.yaml --ntp-server pool.ntp.org",
"openstack server image create --name <image_name> <instance_name> openstack image save --file <exported_vm.qcow2> <image_name>",
"scp exported_vm.qcow2 [email protected]:~/.",
"source ~/overcloudrc",
"(overcloud) USD openstack image create --disk-format qcow2 -file <exported_vm.qcow2> --container-format bare <image_name>",
"(overcloud) USD openstack server create --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id <instance_name>",
"(undercloud)USD openstack tripleo launch heat --heat-dir /home/stack/overcloud-deploy/overcloud/heat-launcher --restore-db",
"(undercloud)USD sudo podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS 958b141609b2 ephemeral-heat Running 2 minutes ago 44447995dbcf 3",
"(undercloud)USD export OS_CLOUD=heat",
"(undercloud)USD openstack stack list +--------------------------------------+------------+---------+-----------------+----------------------+--------------+ | ID | Stack Name | Project | Stack Status | Creation Time | Updated Time | +--------------------------------------+------------+---------+-----------------+----------------------+--------------+ | 761e2a54-c6f9-4e0f-abe6-c8e0ad51a76c | overcloud | admin | CREATE_COMPLETE | 2022-08-29T20:48:37Z | None | +--------------------------------------+------------+---------+-----------------+----------------------+--------------+",
"(undercloud)USD openstack tripleo launch heat --kill",
"(overcloud)USD unset OS_CLOUD (overcloud)USD unset OS_PROJECT_NAME (overcloud)USD unset OS_PROJECT_DOMAIN_NAME (overcloud)USD unset OS_USER_DOMAIN_NAME (overcloud)USD OS_AUTH_TYPE=none (overcloud)USD OS_ENDPOINT=http://127.0.0.1:8006/v1/admin (overcloud)USD export OS_CLOUD=heat",
"(undercloud) [stack@undercloud ~]USD ansible -i ./overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml all --list",
"(undercloud) USD ansible <hosts> -i ./overcloud-deploy/tripleo-ansible-inventory.yaml <playbook> <options>",
"source ~/stackrc",
"(undercloud)USD openstack baremetal node list +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+ | 92ae71b0-3c31-4ebb-b467-6b5f6b0caac7 | compute-0 | 059fb1a1-53ea-4060-9a47-09813de28ea1 | power on | active | False | | 9d6f955e-3d98-4d1a-9611-468761cebabf | compute-1 | e73a4b50-9579-4fe1-bd1a-556a2c8b504f | power on | active | False | | 8a686fc1-1381-4238-9bf3-3fb16eaec6ab | controller-0 | 6d69e48d-10b4-45dd-9776-155a9b8ad575 | power on | active | False | | eb8083cc-5f8f-405f-9b0c-14b772ce4534 | controller-1 | 1f836ac0-a70d-4025-88a3-bbe0583b4b8e | power on | active | False | | a6750f1f-8901-41d6-b9f1-f5d6a10a76c7 | controller-2 | e2edd028-cea6-4a98-955e-5c392d91ed46 | power on | active | False | +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+",
"(undercloud)USD openstack overcloud delete -b <node_definition_file> --networks-file <networks_definition_file> --network-ports <stack>",
"Are you sure you want to delete this overcloud [y/N]?",
"(undercloud) [stack@undercloud-0 ~]USD openstack baremetal node list +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+ | 92ae71b0-3c31-4ebb-b467-6b5f6b0caac7 | compute-0 | None | power off | available | False | | 9d6f955e-3d98-4d1a-9611-468761cebabf | compute-1 | None | power off | available | False | | 8a686fc1-1381-4238-9bf3-3fb16eaec6ab | controller-0 | None | power off | available | False | | eb8083cc-5f8f-405f-9b0c-14b772ce4534 | controller-1 | None | power off | available | False | | a6750f1f-8901-41d6-b9f1-f5d6a10a76c7 | controller-2 | None | power off | available | False | +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+",
"rm -rf ~/overcloud-deploy/<stack> rm -rf ~/config-download/<stack>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_performing-basic-overcloud-administration-tasks
|
Chapter 7. User Access settings in the Red Hat Hybrid Cloud Console
|
Chapter 7. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 7.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 7.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (β) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 7.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (β) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. 7.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (β) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. Additional resources For more information about user access and permissions, see User Access Configuration Guide for Role-based Access Control (RBAC) .
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/getting_started_with_red_hat_insights/insights-rbac
|
1.2. How Do You Perform a Kickstart Installation?
|
1.2. How Do You Perform a Kickstart Installation? Kickstart installations can be performed using a local CD-ROM, a local hard drive, or via NFS, FTP, or HTTP. To use kickstart, you must: Create a kickstart file. Create a boot media with the kickstart file or make the kickstart file available on the network. Make the installation tree available. Start the kickstart installation. This chapter explains these steps in detail.
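To give a feel for what the kickstart file in step 1 contains, a minimal sketch follows. Every value (the NFS server, partition sizes, time zone, and package group) is an illustrative assumption, not a recommendation; the rest of this chapter describes the available directives in detail.

# Minimal illustrative kickstart file -- example values only
install
nfs --server=192.168.0.1 --dir=/export/rhel
lang en_US.UTF-8
langsupport --default=en_US.UTF-8 en_US.UTF-8
keyboard us
network --bootproto=dhcp
# In practice, use rootpw --iscrypted with a hashed password
rootpw changeme
auth --useshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
part /boot --fstype ext3 --size=100
part swap --size=512
part / --fstype ext3 --size=1 --grow
%packages
@ base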
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kickstart_installations-how_do_you_perform_a_kickstart_installation
|
Chapter 2. Automating Network Intrusion Detection and Prevention Systems (IDPS) with Ansible
|
Chapter 2. Automating Network Intrusion Detection and Prevention Systems (IDPS) with Ansible You can use Ansible to automate your Intrusion Detection and Prevention System (IDPS). For the purpose of this guide, we use Snort as the IDPS. Use Ansible automation hub to consume content collections, such as tasks, roles, and modules to create automated workflows. 2.1. Requirements and prerequisites Before you begin automating your IDPS with Ansible, ensure that you have the proper installations and configurations necessary to successfully manage your IDPS. You have installed Ansible-core 2.15 or later. SSH connection and keys are configured. IDPS software (Snort) is installed and configured. You have access to the IDPS server (Snort) to enforce new policies. 2.1.1. Verifying your IDPS installation To verify that Snort has been configured successfully, call it via sudo and ask for the version: USD sudo snort --version ,,_ -*> Snort! <*- o" )~ Version 2.9.13 GRE (Build 15013) "" By Martin Roesch & The Snort Team: http://www.snort.org/contact#team Copyright (C) 2014-2019 Cisco and/or its affiliates.
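If you manage more than one Snort host, you can run the same version check from your Ansible control node with an ad-hoc command. The inventory group name snort used here is an assumption that matches the playbook examples later in this chapter; adjust it to your own inventory.

$ ansible snort -b -m command -a "snort --version"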
|
[
"sudo snort --version ,,_ -*> Snort! <*- o\" )~ Version 2.9.13 GRE (Build 15013) \"\" By Martin Roesch & The Snort Team: http://www.snort.org/contact#team Copyright (C) 2014-2019 Cisco and/or its affiliates. All rights reserved. Copyright (C) 1998-2013 Sourcefire, Inc., et al. Using libpcap version 1.5.3 Using PCRE version: 8.32 2012-11-30 Using ZLIB version: 1.2.7",
"sudo systemctl status snort β snort.service - Snort service Loaded: loaded (/etc/systemd/system/snort.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2019-08-26 17:06:10 UTC; 1s ago Main PID: 17217 (snort) CGroup: /system.slice/snort.service ββ17217 /usr/sbin/snort -u root -g root -c /etc/snort/snort.conf -i eth0 -p -R 1 --pid-path=/var/run/snort --no-interface-pidfile --nolock-pidfile [...]",
"ansible-galaxy install ansible_security.ids_rule",
"- name: Add Snort rule hosts: snort",
"- name: Add Snort rule hosts: snort become: true",
"- name: Add Snort rule hosts: snort become: true vars: ids_provider: snort",
"- name: Add Snort rule hosts: snort become: true vars: ids_provider: snort tasks: - name: Add snort password attack rule include_role: name: \"ansible_security.ids_rule\" vars: ids_rule: 'alert tcp any any -> any any (msg:\"Attempted /etc/passwd Attack\"; uricontent:\"/etc/passwd\"; classtype:attempted-user; sid:99000004; priority:1; rev:1;)' ids_rules_file: '/etc/snort/rules/local.rules' ids_rule_state: present",
"ansible-navigator run add_snort_rule.ym --mode stdout"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_security_automation_guide/assembly-idps_ansible-security
|
Jenkins
|
Jenkins OpenShift Container Platform 4.15 Jenkins Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/jenkins/index
|
Chapter 8. Clustering
|
Chapter 8. Clustering Dynamic Token Timeout for Corosync The token_coefficient option has been added to the Corosync Cluster Engine . The value of token_coefficient is used only when the nodelist section is specified and contains at least three nodes. In such a situation, the token timeout is computed as follows: This allows the cluster to scale without manually changing the token timeout every time a new node is added. The default value is 650 milliseconds, but it can be set to 0, resulting in effective removal of this feature. This feature allows Corosync to handle dynamic addition and removal of nodes. Corosync Tie Breaker Enhancement The auto_tie_breaker quorum feature of Corosync has been enhanced to provide options for more flexible configuration and modification of tie breaker nodes. Users can now select a list of nodes that will retain a quorum in case of an even cluster split, or choose that a quorum will be retained by the node with the lowest node ID or the highest node ID. Enhancements for Red Hat High Availability For the Red Hat Enterprise Linux 7.1 release, the Red Hat High Availability Add-On supports the following features. For information on these features, see the High Availability Add-On Reference manual. The pcs resource cleanup command can now reset the resource status and failcount for all resources. You can specify a lifetime parameter for the pcs resource move command to indicate a period of time that the resource constraint this command creates will remain in effect. You can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). The pcs constraint command now supports the configuration of specific constraint options in addition to general resource options. The pcs resource create command supports the disabled parameter to indicate that the resource being created is not started automatically. The pcs cluster quorum unblock command prevents the cluster from waiting for all nodes when establishing a quorum. You can configure resource group order with the before and after parameters of the pcs resource create command. You can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command.
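Returning to the dynamic token timeout described above, a quick worked example (all values assumed, not prescriptive): with the default token of 1000 ms and the default token_coefficient of 650 ms, a three-node nodelist gives a runtime token timeout of 1000 + (3 - 2) * 650 = 1650 ms. A corresponding corosync.conf fragment might look like this sketch; the cluster and node names are placeholders.

totem {
    version: 2
    cluster_name: examplecluster
    token: 1000
    token_coefficient: 650
}

nodelist {
    node {
        ring0_addr: node1.example.com
        nodeid: 1
    }
    node {
        ring0_addr: node2.example.com
        nodeid: 2
    }
    node {
        ring0_addr: node3.example.com
        nodeid: 3
    }
}

# Effective token timeout: 1000 + (3 - 2) * 650 = 1650 ms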
|
[
"[token + (amount of nodes - 2)] * token_coefficient"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-clustering
|