title | content | commands | url
---|---|---|---|
Chapter 43. Writing Handlers | Chapter 43. Writing Handlers Abstract JAX-WS provides a flexible plug-in framework for adding message processing modules to an application. These modules, known as handlers, are independent of the application level code and can provide low-level message processing capabilities. 43.1. Handlers: An Introduction Overview When a service proxy invokes an operation on a service, the operation's parameters are passed to Apache CXF where they are built into a message and placed on the wire. When the message is received by the service, Apache CXF reads the message from the wire, reconstructs the message, and then passes the operation parameters to the application code responsible for implementing the operation. When the application code is finished processing the request, the reply message undergoes a similar chain of events on its trip to the service proxy that originated the request. This is shown in Figure 43.1, "Message Exchange Path". Figure 43.1. Message Exchange Path JAX-WS defines a mechanism for manipulating the message data between the application level code and the network. For example, you might want the message data passed over the open network to be encrypted using a proprietary encryption mechanism. You could write a JAX-WS handler that encrypted and decrypted the data. Then you could insert the handler into the message processing chains of all clients and servers. As shown in Figure 43.2, "Message Exchange Path with Handlers", the handlers are placed in a chain that is traversed between the application level code and the transport code that places the message onto the network. Figure 43.2. Message Exchange Path with Handlers Handler types The JAX-WS specification defines two basic handler types: Logical Handler Logical handlers can process the message payload and the properties stored in the message context. For example, if the application uses pure XML messages, the logical handlers have access to the entire message. If the application uses SOAP messages, the logical handlers have access to the contents of the SOAP body. They do not have access to either the SOAP headers or any attachments unless they were placed into the message context. Logical handlers are placed closest to the application code on the handler chain. This means that they are executed first when a message is passed from the application code to the transport. When a message is received from the network and passed back to the application code, the logical handlers are executed last. Protocol Handler Protocol handlers can process the entire message received from the network and the properties stored in the message context. For example, if the application uses SOAP messages, the protocol handlers would have access to the contents of the SOAP body, the SOAP headers, and any attachments. Protocol handlers are placed closest to the transport on the handler chain. This means that they are executed first when a message is received from the network. When a message is sent to the network from the application code, the protocol handlers are executed last. Note The only protocol handler supported by Apache CXF is specific to SOAP. Implementation of handlers The differences between the two handler types are very subtle and they share a common base interface. Because of their common parentage, logical handlers and protocol handlers share a number of methods that must be implemented, including: handleMessage() The handleMessage() method is the central method in any handler.
It is the method responsible for processing normal messages. handleFault() handleFault() is the method responsible for processing fault messages. close() close() is called on all executed handlers in a handler chain when a message has reached the end of the chain. It is used to clean up any resources consumed during message processing. The differences between the implementation of a logical handler and the implementation of a protocol handler revolve around the following: The specific interface that is implemented All handlers implement an interface that derives from the Handler interface. Logical handlers implement the LogicalHandler interface. Protocol handlers implement protocol specific extensions of the Handler interface. For example, SOAP handlers implement the SOAPHandler interface. The amount of information available to the handler Protocol handlers have access to the contents of messages and all of the protocol specific information that is packaged with the message content. Logical handlers can only access the contents of the message. Logical handlers have no knowledge of protocol details. Adding handlers to an application To add a handler to an application you must do the following: Determine whether the handler is going to be used on the service providers, the consumers, or both. Determine which type of handler is the most appropriate for the job. Implement the proper interface. To implement a logical handler see Section 43.2, "Implementing a Logical Handler". To implement a protocol handler see Section 43.4, "Implementing a Protocol Handler". Configure your endpoint(s) to use the handlers. See Section 43.10, "Configuring Endpoints to Use Handlers". 43.2. Implementing a Logical Handler Overview Logical handlers implement the javax.xml.ws.handler.LogicalHandler interface. The LogicalHandler interface, shown in Example 43.1, "LogicalHandler Synopsis", passes a LogicalMessageContext object to the handleMessage() method and the handleFault() method. The context object provides access to the body of the message and to any properties set into the message exchange's context. Example 43.1. LogicalHandler Synopsis Procedure To implement a logical handler you do the following: Implement any Section 43.6, "Initializing a Handler" logic required by the handler. Implement the Section 43.3, "Handling Messages in a Logical Handler" logic. Implement the Section 43.7, "Handling Fault Messages" logic. Implement the logic for Section 43.8, "Closing a Handler" the handler when it is finished. Implement any logic for Section 43.9, "Releasing a Handler" the handler's resources before it is destroyed. 43.3. Handling Messages in a Logical Handler Overview Normal message processing is handled by the handleMessage() method. The handleMessage() method receives a LogicalMessageContext object that provides access to the message body and any properties stored in the message context. The handleMessage() method returns either true or false depending on how message processing is to continue. It can also throw an exception. Getting the message data The LogicalMessageContext object passed into logical message handlers allows access to the message body using the context's getMessage() method. The getMessage() method, shown in Example 43.2, "Method for Getting the Message Payload in a Logical Handler", returns the message payload as a LogicalMessage object. Example 43.2.
Method for Getting the Message Payload in a Logical Handler LogicalMessage getMessage(); Once you have the LogicalMessage object, you can use it to manipulate the message body. The LogicalMessage interface, shown in Example 43.3, "Logical Message Holder", has getters and setters for working with the actual message body. Example 43.3. Logical Message Holder LogicalMessage Source getPayload(); Object getPayload(JAXBContext context); void setPayload(Object payload, JAXBContext context); void setPayload(Source payload); Important The contents of the message payload are determined by the type of binding in use. The SOAP binding only allows access to the SOAP body of the message. The XML binding allows access to the entire message body. Working with the message body as an XML object One pair of getters and setters of the logical message work with the message payload as a javax.xml.transform.dom.DOMSource object. The getPayload() method that has no parameters returns the message payload as a DOMSource object. The returned object is the actual message payload. Any changes made to the returned object change the message body immediately. You can replace the body of the message with a DOMSource object using the setPayload() method that takes a single Source object, as shown in the sketch below.
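A minimal sketch of both accessors inside a logical handler's handleMessage() method; it assumes, as described above, that the runtime supplies the payload as a DOMSource, and the variable names are illustrative:

LogicalMessage msg = messageContext.getMessage();
DOMSource source = (DOMSource) msg.getPayload();  // live payload: DOM edits take effect immediately
Node root = source.getNode();                     // org.w3c.dom.Node
// ... inspect or modify the DOM tree here ...
msg.setPayload(source);                           // or pass a newly built Source to replace the body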
Working with the message body as a JAXB object The other pair of getters and setters allow you to work with the message payload as a JAXB object. They use a JAXBContext object to transform the message payload into JAXB objects. To use the JAXB objects you do the following: Get a JAXBContext object that can manage the data types in the message body. For information on creating a JAXBContext object see Chapter 39, Using A JAXBContext Object. Get the message body as shown in Example 43.4, "Getting the Message Body as a JAXB Object". Example 43.4. Getting the Message Body as a JAXB Object Cast the returned object to the proper type. Manipulate the message body as needed. Put the updated message body back into the context as shown in Example 43.5, "Updating the Message Body Using a JAXB Object". Example 43.5. Updating the Message Body Using a JAXB Object Working with context properties The logical message context passed into a logical handler is an instance of the application's message context and can access all of the properties stored in it. Handlers have access to properties at both the APPLICATION scope and the HANDLER scope. Like the application's message context, the logical message context is a subclass of Java Map. To access the properties stored in the context, you use the get() method and put() method inherited from the Map interface. By default, any properties you set in the message context from inside a logical handler are assigned a scope of HANDLER. If you want the application code to be able to access the property you need to use the context's setScope() method to explicitly set the property's scope to APPLICATION. For more information on working with properties in the message context see Section 42.1, "Understanding Contexts". Determining the direction of the message It is often important to know the direction a message is passing through the handler chain. For example, you would want to retrieve a security token from incoming requests and attach a security token to an outgoing response. The direction of the message is stored in the message context's outbound message property. You retrieve the outbound message property from the message context using the MessageContext.MESSAGE_OUTBOUND_PROPERTY key as shown in Example 43.6, "Getting the Message's Direction from the SOAP Message Context". Example 43.6. Getting the Message's Direction from the SOAP Message Context The property is stored as a Boolean object. You can use the object's booleanValue() method to determine the property's value. If the property is set to true, the message is outbound. If the property is set to false the message is inbound. Determining the return value How the handleMessage() method completes its message processing has a direct impact on how message processing proceeds. It can complete by doing one of the following actions: Return true: Returning true signals to the Apache CXF runtime that message processing should continue normally. The next handler in the chain, if any, has its handleMessage() method invoked. Return false: Returning false signals to the Apache CXF runtime that normal message processing must stop. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message stops progressing toward the service's implementation object. Instead, it is sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleMessage() method invoked in the order in which they reside in the chain. When the message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: Message processing stops. All previously invoked message handlers have their close() method invoked. The message is dispatched. Throw a ProtocolException exception: Throwing a ProtocolException exception, or a subclass of this exception, signals the Apache CXF runtime that fault message processing is beginning. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message stops progressing toward the service's implementation object. Instead, it is sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleFault() method invoked in the order in which they reside in the chain. When the fault message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. Message processing stops. All previously invoked message handlers have their close() method invoked. The fault message is dispatched. Throw any other runtime exception: Throwing a runtime exception other than a ProtocolException exception signals the Apache CXF runtime that message processing is to stop. All previously invoked message handlers have the close() method invoked and the exception is dispatched.
If the message is part of a request-response message exchange, the exception is dispatched so that it is returned to the consumer that originated the request. Example Example 43.7, "Logical Message Handler Message Processing" shows an implementation of the handleMessage() method for a logical message handler that is used by a service consumer. It processes requests before they are sent to the service provider. Example 43.7. Logical Message Handler Message Processing The code in Example 43.7, "Logical Message Handler Message Processing" does the following: Checks if the message is an outbound request. If the message is an outbound request, the handler does additional message processing. Gets the LogicalMessage representation of the message payload from the message context. Gets the actual message payload as a JAXB object. Checks to make sure the request is of the correct type. If it is, the handler continues processing the message. Checks the value of the sum. If it is less than the threshold of 20 then it builds a response and returns it to the client. Builds the response. Returns false to stop message processing and return the response to the client. Throws a runtime exception if the message is not of the correct type. This exception is returned to the client. Returns true if the message is an inbound response or the sum does not meet the threshold. Message processing continues normally. Throws a ProtocolException if a JAXB marshalling error is encountered. The exception is passed back to the client after it is processed by the handleFault() method of the handlers between the current handler and the client. 43.4. Implementing a Protocol Handler Overview Protocol handlers are specific to the protocol in use. Apache CXF provides the SOAP protocol handler as specified by JAX-WS. A SOAP protocol handler implements the javax.xml.ws.handler.soap.SOAPHandler interface. The SOAPHandler interface, shown in Example 43.8, "SOAPHandler Synopsis", uses a SOAP specific message context that provides access to the message as a SOAPMessage object. It also allows you to access the SOAP headers. Example 43.8. SOAPHandler Synopsis In addition to using a SOAP specific message context, SOAP protocol handlers require that you implement an additional method called getHeaders(). This additional method returns the QNames of the header blocks the handler can process. Procedure To implement a SOAP protocol handler do the following: Implement any Section 43.6, "Initializing a Handler" logic required by the handler. Implement the Section 43.5, "Handling Messages in a SOAP Handler" logic. Implement the Section 43.7, "Handling Fault Messages" logic. Implement the getHeaders() method. Implement the logic for Section 43.8, "Closing a Handler" the handler when it is finished. Implement any logic for Section 43.9, "Releasing a Handler" the handler's resources before it is destroyed. Implementing the getHeaders() method The getHeaders() method, shown in Example 43.9, "The SOAPHandler.getHeaders() Method", informs the Apache CXF runtime what SOAP headers the handler is responsible for processing. It returns the QNames of the outer element of each SOAP header the handler understands. Example 43.9. The SOAPHandler.getHeaders() Method Set<QName> getHeaders(); For many cases simply returning null is sufficient. However, if the application uses the mustUnderstand attribute of any of the SOAP headers, then it is important to specify the headers understood by the application's SOAP handlers. The runtime checks the set of SOAP headers that all of the registered handlers understand against the list of headers with the mustUnderstand attribute set to true. If any of the flagged headers are not in the list of understood headers, the runtime rejects the message and throws a SOAP must understand exception.
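For example, a handler that understands a single custom security header might implement getHeaders() as follows (a minimal sketch; the namespace and element name are hypothetical):

public Set<QName> getHeaders() {
    // QName of the outer element of each SOAP header this handler processes
    Set<QName> understood = new HashSet<QName>();
    understood.add(new QName("http://example.com/security", "SecurityToken"));
    return understood;
}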
43.5. Handling Messages in a SOAP Handler Overview Normal message processing is handled by the handleMessage() method. The handleMessage() method receives a SOAPMessageContext object that provides access to the message body as a SOAPMessage object and the SOAP headers associated with the message. In addition, the context provides access to any properties stored in the message context. The handleMessage() method returns either true or false depending on how message processing is to continue. It can also throw an exception. Working with the message body You can get the SOAP message using the SOAP message context's getMessage() method. It returns the message as a live SOAPMessage object. Any changes to the message in the handler are automatically reflected in the message stored in the context. If you wish to replace the existing message with a new one, you can use the context's setMessage() method. The setMessage() method takes a SOAPMessage object. Getting the SOAP headers You can access the SOAP message's headers using the SOAPMessage object's getHeader() method. This will return the SOAP header as a SOAPHeader object that you will need to inspect to find the header elements you wish to process. The SOAP message context provides a getHeaders() method, shown in Example 43.10, "The SOAPMessageContext.getHeaders() Method", that will return an array containing JAXB objects for the specified SOAP headers. Example 43.10. The SOAPMessageContext.getHeaders() Method Object[] getHeaders(QName header, JAXBContext context, boolean allRoles); You specify the headers using the QName of their element. You can further limit the headers that are returned by setting the allRoles parameter to false. That instructs the runtime to only return the SOAP headers that are applicable to the active SOAP roles. If no headers are found, the method returns an empty array. For more information about instantiating a JAXBContext object see Chapter 39, Using A JAXBContext Object. Working with context properties The SOAP message context passed into a SOAP handler is an instance of the application's message context and can access all of the properties stored in it. Handlers have access to properties at both the APPLICATION scope and the HANDLER scope. Like the application's message context, the SOAP message context is a subclass of Java Map. To access the properties stored in the context, you use the get() method and put() method inherited from the Map interface. By default, any properties you set in the context from inside a SOAP handler will be assigned a scope of HANDLER. If you want the application code to be able to access the property you need to use the context's setScope() method to explicitly set the property's scope to APPLICATION. For more information on working with properties in the message context see Section 42.1, "Understanding Contexts". A short sketch tying these pieces together is shown below.
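The following fragment, inside handleMessage(), looks up a specific header and records a result for the application code; the header QName, property name, and jaxbContext are illustrative assumptions:

// Look up a header as JAXB objects, restricted to the active SOAP roles.
Object[] headers = smc.getHeaders(
        new QName("http://example.com/security", "SecurityToken"),
        jaxbContext,   // a JAXBContext able to handle the header's types
        false);
// Properties default to HANDLER scope, so promote this one explicitly
// if the application code needs to read it.
smc.put("com.example.tokenCount", headers.length);
smc.setScope("com.example.tokenCount", MessageContext.Scope.APPLICATION);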
Determining the direction of the message It is often important to know the direction a message is passing through the handler chain. For example, you would want to add headers to an outgoing message and strip headers from an incoming message. The direction of the message is stored in the message context's outbound message property. You retrieve the outbound message property from the message context using the MessageContext.MESSAGE_OUTBOUND_PROPERTY key as shown in Example 43.11, "Getting the Message's Direction from the SOAP Message Context". Example 43.11. Getting the Message's Direction from the SOAP Message Context The property is stored as a Boolean object. You can use the object's booleanValue() method to determine the property's value. If the property is set to true, the message is outbound. If the property is set to false the message is inbound. Determining the return value How the handleMessage() method completes its message processing has a direct impact on how message processing proceeds. It can complete by doing one of the following actions: Return true: Returning true signals to the Apache CXF runtime that message processing should continue normally. The next handler in the chain, if any, has its handleMessage() method invoked. Return false: Returning false signals to the Apache CXF runtime that normal message processing is to stop. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message will stop progressing toward the service's implementation object. It will instead be sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleMessage() method invoked in the order in which they reside in the chain. When the message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: Message processing stops. All previously invoked message handlers have their close() method invoked. The message is dispatched. Throw a ProtocolException exception: Throwing a ProtocolException exception, or a subclass of this exception, signals the Apache CXF runtime that fault message processing is to start. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message will stop progressing toward the service's implementation object. It will be sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleFault() method invoked in the order in which they reside in the chain. When the fault message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. Message processing stops. All previously invoked message handlers have their close() method invoked. The fault message is dispatched. Throw any other runtime exception: Throwing a runtime exception other than a ProtocolException exception signals the Apache CXF runtime that message processing is to stop. All previously invoked message handlers have the close() method invoked and the exception is dispatched.
If the message is part of a request-response message exchange the exception is dispatched so that it is returned to the consumer that originated the request. Example Example 43.12, "Handling a Message in a SOAP Handler" shows a handleMessage() implementation that prints the SOAP message to the screen. Example 43.12. Handling a Message in a SOAP Handler The code in Example 43.12, "Handling a Message in a SOAP Handler" does the following: Retrieves the outbound property from the message context. Tests the message's direction and prints the appropriate message. Retrieves the SOAP message from the context. Prints the message to the console. 43.6. Initializing a Handler Overview When the runtime creates an instance of a handler, it creates all of the resources the handler needs to process messages. While you can place all of the logic for doing this in the handler's constructor, it may not be the most appropriate place. The handler framework performs a number of optional steps when it instantiates a handler. You can add resource injection and other initialization logic that will be executed during the optional steps. You do not have to provide any initialization methods for a handler. Order of initialization The Apache CXF runtime initializes a handler in the following manner: The handler's constructor is called. Any resources that are specified by the @Resource annotation are injected. The method decorated with the @PostConstruct annotation, if it is present, is called. Note Methods decorated with the @PostConstruct annotation must have a void return type and have no parameters. The handler is placed in the Ready state.
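The sketch below illustrates these initialization hooks; the injected WebServiceContext and the validation performed in the @PostConstruct method are illustrative assumptions, not requirements:

public class InitializedHandler implements LogicalHandler<LogicalMessageContext> {
    @Resource
    private WebServiceContext wsContext;  // injected after the constructor runs

    @PostConstruct
    public void init() {                  // must return void and take no parameters
        if (wsContext == null) {
            throw new IllegalStateException("WebServiceContext was not injected");
        }
    }
    // handleMessage(), handleFault(), and close() omitted for brevity
}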
43.7. Handling Fault Messages Overview Handlers use the handleFault() method for processing fault messages when a ProtocolException exception is thrown during message processing. The handleFault() method receives either a LogicalMessageContext object or SOAPMessageContext object depending on the type of handler. The received context gives the handler's implementation access to the message payload. The handleFault() method returns either true or false, depending on how fault message processing is to proceed. It can also throw an exception. Getting the message payload The context object received by the handleFault() method is similar to the one received by the handleMessage() method. You use the context's getMessage() method to access the message payload in the same way. The only difference is the payload contained in the context. For more information on working with a LogicalMessageContext see Section 43.3, "Handling Messages in a Logical Handler". For more information on working with a SOAPMessageContext see Section 43.5, "Handling Messages in a SOAP Handler". Determining the return value How the handleFault() method completes its message processing has a direct impact on how message processing proceeds. It completes by performing one of the following actions: Return true Returning true signals that fault processing should continue normally. The handleFault() method of the next handler in the chain is invoked. Return false Returning false signals that fault processing stops. The close() method of each handler that was invoked in processing the current message is called and the fault message is dispatched. Throw an exception Throwing an exception stops fault message processing. The close() method of each handler that was invoked in processing the current message is called and the exception is dispatched. Example Example 43.13, "Handling a Fault in a Message Handler" shows an implementation of handleFault() that prints the message body to the screen. Example 43.13. Handling a Fault in a Message Handler 43.8. Closing a Handler When a handler chain is finished processing a message, the runtime calls each executed handler's close() method. This is the appropriate place to clean up any resources that were used by the handler during message processing or to reset any properties to a default state. If a resource needs to persist beyond a single message exchange, you should not clean it up in the handler's close() method. 43.9. Releasing a Handler Overview The runtime releases a handler when the service or service proxy to which the handler is bound is shut down. The runtime will invoke an optional release method before invoking the handler's destructor. This optional release method can be used to release any resources used by the handler or perform other actions that would not be appropriate in the handler's destructor. You do not have to provide any clean-up methods for a handler. Order of release The following happens when the handler is released: The handler finishes processing any active messages. The runtime invokes the method decorated with the @PreDestroy annotation. This method should clean up any resources used by the handler. The handler's destructor is called. 43.10. Configuring Endpoints to Use Handlers 43.10.1. Programmatic Configuration 43.10.1.1. Adding a Handler Chain to a Consumer Overview Adding a handler chain to a consumer involves explicitly building the chain of handlers. Then you set the handler chain directly on the service proxy's Binding object. Important Any handler chains configured using the Spring configuration override the handler chains configured programmatically. Procedure To add a handler chain to a consumer you do the following: Create a List<Handler> object to hold the handler chain. Create an instance of each handler that will be added to the chain. Add each of the instantiated handler objects to the list in the order they are to be invoked by the runtime. Get the Binding object from the service proxy. Apache CXF provides an implementation of the Binding interface called org.apache.cxf.jaxws.binding.DefaultBindingImpl. Set the handler chain on the proxy using the Binding object's setHandlerChain() method. Example Example 43.14, "Adding a Handler Chain to a Consumer" shows code for adding a handler chain to a consumer. Example 43.14. Adding a Handler Chain to a Consumer The code in Example 43.14, "Adding a Handler Chain to a Consumer" does the following: Instantiates a handler. Creates a List object to hold the chain. Adds the handler to the chain. Gets the Binding object from the proxy as a DefaultBindingImpl object. Assigns the handler chain to the proxy's binding. 43.10.1.2. Adding a Handler Chain to a Service Provider Overview You add a handler chain to a service provider by decorating either the SEI or the implementation class with the @HandlerChain annotation. The annotation points to a meta-data file defining the handler chain used by the service provider. Procedure To add a handler chain to a service provider you do the following: Decorate the provider's implementation class with the @HandlerChain annotation. Create a handler configuration file that defines the handler chain. The @HandlerChain annotation The javax.jws.HandlerChain annotation decorates the service provider's implementation class.
It instructs the runtime to load the handler chain configuration file specified by its file property. The annotation's file property supports two methods for identifying the handler configuration file to load: a URL a relative path name Example 43.15, "Service Implementation that Loads a Handler Chain" shows a service provider implementation that will use the handler chain defined in a file called handlers.xml. handlers.xml must be located in the directory from which the service provider is run. Example 43.15. Service Implementation that Loads a Handler Chain Handler configuration file The handler configuration file defines a handler chain using the XML grammar that accompanies JSR 109 (Web Services for Java EE, Version 1.2). This grammar is defined in the http://java.sun.com/xml/ns/javaee namespace. The root element of the handler configuration file is the handler-chains element. The handler-chains element has one or more handler-chain elements. The handler-chain element defines a handler chain. Table 43.1, "Elements Used to Define a Server-Side Handler Chain" describes the handler-chain element's children. Table 43.1. Elements Used to Define a Server-Side Handler Chain Element Description handler Contains the elements that describe a handler. service-name-pattern Specifies the QName of the WSDL service element defining the service to which the handler chain is bound. You can use * as a wildcard when defining the QName. port-name-pattern Specifies the QName of the WSDL port element defining the endpoint to which the handler chain is bound. You can use * as a wildcard when defining the QName. protocol-binding Specifies the message binding for which the handler chain is used. The binding is specified as a URI or using one of the following aliases: ##SOAP11_HTTP, ##SOAP11_HTTP_MTOM, ##SOAP12_HTTP, ##SOAP12_HTTP_MTOM, or ##XML_HTTP. For more information about message binding URIs see Chapter 23, Apache CXF Binding IDs. The handler-chain element is only required to have a single handler element as a child. It can, however, support as many handler elements as needed to define the complete handler chain. The handlers in the chain are executed in the order in which they are specified in the handler chain definition. Important The final order of execution will be determined by sorting the specified handlers into logical handlers and protocol handlers. Within the groupings, the order specified in the configuration will be used. The other children, such as protocol-binding, are used to limit the scope of the defined handler chain. For example, if you use the service-name-pattern element, the handler chain will only be attached to service providers whose WSDL port element is a child of the specified WSDL service element. You can only use one of these limiting children in a handler-chain element. The handler element defines an individual handler in a handler chain. Its handler-class child element specifies the fully qualified name of the class implementing the handler. The handler element can also have an optional handler-name element that specifies a unique name for the handler. Example 43.16, "Handler Configuration File" shows a handler configuration file that defines a single handler chain. The chain is made up of two handlers. Example 43.16. Handler Configuration File 43.10.2. Spring Configuration Overview The easiest way to configure an endpoint to use a handler chain is to define the chain in the endpoint's configuration. This is done by adding a jaxws:handlers child to the element configuring the endpoint.
Important A handler chain added through the configuration file takes precedence over a handler chain configured programmatically. Procedure To configure an endpoint to load a handler chain you do the following: If the endpoint does not already have a configuration element, add one. For more information on configuring Apache CXF endpoints see Chapter 17, Configuring JAX-WS Endpoints. Add a jaxws:handlers child element to the endpoint's configuration element. For each handler in the chain, add a bean element specifying the class that implements the handler. If your handler implementation is used in more than one place you can reference a bean element using the ref element. The handlers element The jaxws:handlers element defines a handler chain in an endpoint's configuration. It can appear as a child to all of the JAX-WS endpoint configuration elements. These are: jaxws:endpoint configures a service provider. jaxws:server also configures a service provider. jaxws:client configures a service consumer. You add handlers to the handler chain in one of two ways: add a bean element defining the implementation class use a ref element to refer to a named bean element from elsewhere in the configuration file The order in which the handlers are defined in the configuration is the order in which they will be executed. The order may be modified if you mix logical handlers and protocol handlers. The runtime will sort them into the proper order while maintaining the basic order specified in the configuration. Example Example 43.17, "Configuring an Endpoint to Use a Handler Chain In Spring" shows the configuration for a service provider that loads a handler chain. Example 43.17. Configuring an Endpoint to Use a Handler Chain In Spring | [
"public interface LogicalHandler extends Handler { boolean handleMessage(LogicalMessageContext context); boolean handleFault(LogicalMessageContext context); void close(LogicalMessageContext context); }",
"JAXBContext jaxbc = JAXBContext(myObjectFactory.class); Object body = message.getPayload(jaxbc);",
"message.setPayload(body, jaxbc);",
"Boolean outbound; outbound = (Boolean)smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);",
"public class SmallNumberHandler implements LogicalHandler<LogicalMessageContext> { public final boolean handleMessage(LogicalMessageContext messageContext) { try { boolean outbound = (Boolean)messageContext.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY); if (outbound) { LogicalMessage msg = messageContext.getMessage(); JAXBContext jaxbContext = JAXBContext.newInstance(ObjectFactory.class); Object payload = msg.getPayload(jaxbContext); if (payload instanceof JAXBElement) { payload = ((JAXBElement)payload).getValue(); } if (payload instanceof AddNumbers) { AddNumbers req = (AddNumbers)payload; int a = req.getArg0(); int b = req.getArg1(); int answer = a + b; if (answer < 20) { AddNumbersResponse resp = new AddNumbersResponse(); resp.setReturn(answer); msg.setPayload(new ObjectFactory().createAddNumbersResponse(resp), jaxbContext); return false; } } else { throw new WebServiceException(\"Bad Request\"); } } return true; } catch (JAXBException ex) { throw new ProtocolException(ex); } } }",
"public interface SOAPHandler extends Handler { boolean handleMessage(SOAPMessageContext context); boolean handleFault(SOAPMessageContext context); void close(SOAPMessageContext context); Set<QName> getHeaders() }",
"Boolean outbound; outbound = (Boolean)smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);",
"public boolean handleMessage(SOAPMessageContext smc) { PrintStream out; Boolean outbound = (Boolean)smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY); if (outbound.booleanValue()) { out.println(\"\\nOutbound message:\"); } else { out.println(\"\\nInbound message:\"); } SOAPMessage message = smc.getMessage(); message.writeTo(out); out.println(); return true; }",
"public final boolean handleFault(LogicalMessageContext messageContext) { System.out.println(\"handleFault() called with message:\"); LogicalMessage msg=messageContext.getMessage(); System.out.println(msg.getPayload()); return true; }",
"import javax.xml.ws.BindingProvider; import javax.xml.ws.handler.Handler; import java.util.ArrayList; import java.util.List; import org.apache.cxf.jaxws.binding.DefaultBindingImpl; SmallNumberHandler sh = new SmallNumberHandler(); List<Handler> handlerChain = new ArrayList<Handler>(); handlerChain.add(sh); DefaultBindingImpl binding = ((BindingProvider)proxy).getBinding(); binding.getBinding().setHandlerChain(handlerChain);",
"import javax.jws.HandlerChain; import javax.jws.WebService; @WebService(name = \"AddNumbers\", targetNamespace = \"http://apache.org/handlers\", portName = \"AddNumbersPort\", endpointInterface = \"org.apache.handlers.AddNumbers\", serviceName = \"AddNumbersService\") @HandlerChain(file = \"handlers.xml\") public class AddNumbersImpl implements AddNumbers { }",
"<handler-chains xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee\"> <handler-chain> <handler> <handler-name>LoggingHandler</handler-name> <handler-class>demo.handlers.common.LoggingHandler</handler-class> </handler> <handler> <handler-name>AddHeaderHandler</handler-name> <handler-class>demo.handlers.common.AddHeaderHandler</handler-class> </handler> </handler-chain> </handler-chains>",
"<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:endpoint id=\"HandlerExample\" implementor=\"org.apache.cxf.example.DemoImpl\" address=\"http://localhost:8080/demo\"> <jaxws:handlers> <bean class=\"demo.handlers.common.LoggingHandler\" /> <bean class=\"demo.handlers.common.AddHeaderHandler\" /> </jaxws:handlers> </jaws:endpoint> </beans>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSHandlers |
Chapter 7. Configuring the database | Chapter 7. Configuring the database This chapter explains how to configure the Red Hat build of Keycloak server to store data in a relational database. 7.1. Supported databases The server has built-in support for different databases. You can query the available databases by viewing the expected values for the db configuration option. The following table lists the supported databases and their tested versions. Database Option value Tested Version MariaDB Server mariadb 10.11 Microsoft SQL Server mssql 2022 MySQL mysql 8.0 Oracle Database oracle 19.3 PostgreSQL postgres 16 By default, the server uses the dev-file database. This is the default database that the server will use to persist data and only exists for development use-cases. The dev-file database is not suitable for production use-cases, and must be replaced before deploying to production. 7.2. Installing a database driver Database drivers are shipped as part of Red Hat build of Keycloak except for the Oracle Database and Microsoft SQL Server drivers, which need to be installed separately. Install the necessary driver if you want to connect to one of these databases or skip this section if you want to connect to a different database for which the database driver is already included. 7.2.1. Installing the Oracle Database driver To install the Oracle Database driver for Red Hat build of Keycloak: Download the ojdbc11 and orai18n JAR files from one of the following sources: Zipped JDBC driver and Companion Jars version 23.2.0.0 from the Oracle driver download page. Maven Central via ojdbc11 and orai18n. Installation media recommended by the database vendor for the specific database in use. When running the unzipped distribution: Place the ojdbc11 and orai18n JAR files in Red Hat build of Keycloak's providers folder When running containers: Build a custom Red Hat build of Keycloak image and add the JARs in the providers folder. When building a custom image for the Keycloak Operator, those images need to be optimized images with all build-time options of Keycloak set. A minimal Dockerfile to build an image which can be used with the Red Hat build of Keycloak Operator and includes Oracle Database JDBC drivers downloaded from Maven Central looks like the following: FROM registry.redhat.io/rhbk/keycloak-rhel9:22 ADD --chown=keycloak:keycloak https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.2.0.0/ojdbc11-23.2.0.0.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.2.0.0/orai18n-23.2.0.0.jar /opt/keycloak/providers/orai18n.jar # Setting the build parameter for the database: ENV KC_DB=oracle # Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true # To be able to use the image with the {project_name} Operator, it needs to be optimized, which requires {project_name}'s build step: RUN /opt/keycloak/bin/kc.sh build See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images. Then continue configuring the database as described in the following section. 7.2.2. Installing the Microsoft SQL Server driver To install the Microsoft SQL Server driver for Red Hat build of Keycloak: Download the mssql-jdbc JAR file from one of the following sources: Download a version from the Microsoft JDBC Driver for SQL Server page. Maven Central via mssql-jdbc.
Installation media recommended by the database vendor for the specific database in use. When running the unzipped distribution: Place the mssql-jdbc JAR file in Red Hat build of Keycloak's providers folder When running containers: Build a custom Red Hat build of Keycloak image and add the JARs in the providers folder. When building a custom image for the Red Hat build of Keycloak Operator, those images need to be optimized images with all build-time options of Red Hat build of Keycloak set. A minimal Dockerfile to build an image which can be used with the Red Hat build of Keycloak Operator and includes Microsoft SQL Server JDBC drivers downloaded from Maven Central looks like the following: FROM registry.redhat.io/rhbk/keycloak-rhel9:22 ADD --chown=keycloak:keycloak https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.2.0.jre11/mssql-jdbc-12.2.0.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar # Setting the build parameter for the database: ENV KC_DB=mssql # Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true # To be able to use the image with the {project_name} Operator, it needs to be optimized, which requires {project_name}'s build step: RUN /opt/keycloak/bin/kc.sh build See the Running Red Hat build of Keycloak in a container chapter for details on how to build optimized images. Then continue configuring the database as described in the following section. 7.3. Configuring a database For each supported database, the server provides some opinionated defaults to simplify database configuration. You complete the configuration by providing some key settings such as the database host and credentials. Start the server and set the basic options to configure a database bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me This command includes the minimum settings needed to connect to the database. The default schema is keycloak, but you can change it by using the db-schema configuration option. Warning Do NOT use the --optimized flag for the start command if you want to use a particular DB (except the H2). Executing the build phase before starting the server instance is necessary. You can achieve it either by starting the instance without the --optimized flag, or by executing the build command before the optimized start. For more information, see Configuring Red Hat build of Keycloak. 7.4. Overriding default connection settings The server uses JDBC as the underlying technology to communicate with the database. If the default connection settings are insufficient, you can specify a JDBC URL using the db-url configuration option. The following is a sample command for a PostgreSQL database. bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase Be aware that you need to escape characters when invoking commands containing special shell characters such as ; using the CLI, so you might want to set it in the configuration file instead. 7.5. Overriding the default JDBC driver The server uses a default JDBC driver according to the database you chose. To set a different driver you can set the db-driver option to the fully qualified class name of the JDBC driver: bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver Regardless of the driver you set, the default driver is always available at runtime. Only set this property if you really need to.
For instance, when leveraging the capabilities from a JDBC Driver Wrapper for a specific cloud database service. 7.6. Configuring Unicode support for the database Unicode support for all fields depends on whether the database allows VARCHAR and CHAR fields to use the Unicode character set. If these fields can be set, Unicode is likely to work, usually at the expense of field length. If the database only supports Unicode in the NVARCHAR and NCHAR fields, Unicode support for all text fields is unlikely to work because the server schema uses VARCHAR and CHAR fields extensively. The database schema provides support for Unicode strings only for the following special fields: Realms: display name, HTML display name, localization texts (keys and values) Federation Providers: display name Users: username, given name, last name, attribute names and values Groups: name, attribute names and values Roles: name Descriptions of objects Otherwise, characters are limited to those contained in database encoding, which is often 8-bit. However, for some database systems, you can enable UTF-8 encoding of Unicode characters and use the full Unicode character set in all text fields. For a given database, this choice might result in a shorter maximum string length than the maximum string length supported by 8-bit encodings. 7.6.1. Configuring Unicode support for an Oracle database Unicode characters are supported in an Oracle database if the database was created with Unicode support in the VARCHAR and CHAR fields. For example, you configured AL32UTF8 as the database character set. In this case, the JDBC driver requires no special settings. If the database was not created with Unicode support, you need to configure the JDBC driver to support Unicode characters in the special fields. You configure two properties. Note that you can configure these properties as system properties or as connection properties. Set oracle.jdbc.defaultNChar to true. Optionally, set oracle.jdbc.convertNcharLiterals to true. Note For details on these properties and any performance implications, see the Oracle JDBC driver configuration documentation. 7.6.2. Unicode support for a Microsoft SQL Server database Unicode characters are supported only for the special fields for a Microsoft SQL Server database. The database requires no special settings. The sendStringParametersAsUnicode property of the JDBC driver should be set to false to significantly improve performance. Without this parameter, the Microsoft SQL Server might be unable to use indexes. 7.6.3. Configuring Unicode support for a MySQL database Unicode characters are supported in a MySQL database if the database was created with Unicode support in the VARCHAR and CHAR fields when using the CREATE DATABASE command. Note that the utf8mb4 character set is not supported due to different storage requirements for the utf8 character set. See MySQL documentation for details. In that situation, the length restriction on non-special fields does not apply because columns are created to accommodate the number of characters, not bytes. If the database default character set does not allow Unicode storage, only the special fields allow storing Unicode values. Start MySQL Server. Under JDBC driver settings, locate the JDBC connection settings. Add this connection property: characterEncoding=UTF-8
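When the server builds the JDBC URL for you, this property can be passed through the db-url-properties option instead of hand-writing the URL. A sketch, assuming the default MySQL JDBC URL (note that MySQL expects the property string to begin with ?): bin/kc.[sh|bat] start --db mysql --db-url-host mymysql --db-url-properties '?characterEncoding=UTF-8' --db-username myuser --db-password change_me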
7.6.4. Configuring Unicode support for a PostgreSQL database Unicode is supported for a PostgreSQL database when the database character set is UTF8. Unicode characters can be used in any field with no reduction of field length for non-special fields. The JDBC driver requires no special settings. The character set is determined when the PostgreSQL database is created. Check the default character set for a PostgreSQL cluster by entering the following SQL command. If the default character set is not UTF8, create the database with UTF8 as the default character set using a command such as: 7.7. Changing database locking timeout in a cluster configuration Because cluster nodes can boot concurrently, they take extra time for database actions. For example, a booting server instance may perform some database migration, importing, or first time initializations. A database lock prevents start actions from conflicting with each other when cluster nodes boot up concurrently. The maximum timeout for this lock is 900 seconds. If a node waits on this lock for more than the timeout, the boot fails. The need to change the default value is unlikely, but you can change it by entering this command: bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900 7.8. Using Database Vendors without XA transaction support Red Hat build of Keycloak uses XA transactions and the appropriate database drivers by default. Certain vendors, such as Azure SQL and MariaDB Galera, do not support or rely on the XA transaction mechanism. To use Keycloak without XA transaction support using the appropriate JDBC driver, enter the following command: bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=false Red Hat build of Keycloak automatically chooses the appropriate JDBC driver for your vendor. 7.9. Setting JPA provider configuration option for migrationStrategy To set up the JPA migrationStrategy (manual/update/validate), configure the JPA provider as follows: Setting the migration-strategy for the quarkus provider of the connections-jpa SPI bin/kc.[sh|bat] start --spi-connections-jpa-legacy-migration-strategy=manual If you want to get a SQL file for DB initialization, too, you have to add the additional SPI option initializeEmpty (true/false): Setting the initialize-empty for the quarkus provider of the connections-jpa SPI bin/kc.[sh|bat] start --spi-connections-jpa-legacy-initialize-empty=false In the same way, set migrationExport to point to a specific file and location: Setting the migration-export for the quarkus provider of the connections-jpa SPI bin/kc.[sh|bat] start --spi-connections-jpa-legacy-migration-export=<path>/<file.sql> 7.10. Relevant options Value db 🛠 The database vendor. CLI: --db Env: KC_DB dev-file (default), dev-mem, mariadb, mssql, mysql, oracle, postgres db-driver The fully qualified class name of the JDBC driver. If not set, a default driver is set according to the chosen database. CLI: --db-driver Env: KC_DB_DRIVER db-password The password of the database user. CLI: --db-password Env: KC_DB_PASSWORD db-pool-initial-size The initial size of the connection pool. CLI: --db-pool-initial-size Env: KC_DB_POOL_INITIAL_SIZE db-pool-max-size The maximum size of the connection pool. CLI: --db-pool-max-size Env: KC_DB_POOL_MAX_SIZE 100 (default) db-pool-min-size The minimal size of the connection pool. CLI: --db-pool-min-size Env: KC_DB_POOL_MIN_SIZE db-schema The database schema to be used. CLI: --db-schema Env: KC_DB_SCHEMA db-url The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor. For instance, if using postgres, the default JDBC URL would be jdbc:postgresql://localhost/keycloak.
CLI: --db-url Env: KC_DB_URL db-url-database Sets the database name of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-database Env: KC_DB_URL_DATABASE db-url-host Sets the hostname of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-host Env: KC_DB_URL_HOST db-url-port Sets the port of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-port Env: KC_DB_URL_PORT db-url-properties Sets the properties of the default JDBC URL of the chosen vendor. Make sure to set the properties according to the format expected by the database vendor, as well as appending the right character at the beginning of this property value. If the db-url option is set, this option is ignored. CLI: --db-url-properties Env: KC_DB_URL_PROPERTIES db-username The username of the database user. CLI: --db-username Env: KC_DB_USERNAME transaction-xa-enabled 🛠 If set to false, Keycloak uses a non-XA datasource in case the database does not support XA transactions. CLI: --transaction-xa-enabled Env: KC_TRANSACTION_XA_ENABLED true (default), false | [
"FROM registry.redhat.io/rhbk/keycloak-rhel9:22 ADD --chown=keycloak:keycloak https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.2.0.0/ojdbc11-23.2.0.0.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.2.0.0/orai18n-23.2.0.0.jar /opt/keycloak/providers/orai18n.jar Setting the build parameter for the database: ENV KC_DB=oracle Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the {project_name} Operator, it needs to be optimized, which requires {project_name}'s build step: RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:22 ADD --chown=keycloak:keycloak https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.2.0.jre11/mssql-jdbc-12.2.0.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar Setting the build parameter for the database: ENV KC_DB=mssql Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the {project_name} Operator, it needs to be optimized, which requires {project_name}'s build step: RUN /opt/keycloak/bin/kc.sh build",
"bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me",
"bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase",
"bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver",
"show server_encoding;",
"create database keycloak with encoding 'UTF8';",
"bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900",
"bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=false",
"bin/kc.[sh|bat] start --spi-connections-jpa-legacy-migration-strategy=manual",
"bin/kc.[sh|bat] start --spi-connections-jpa-legacy-initialize-empty=false",
"bin/kc.[sh|bat] start --spi-connections-jpa-legacy-migration-export=<path>/<file.sql>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/db- |
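A combined startup invocation drawing only on the options documented above is sketched below; the host name, database name, credentials, and pool sizes are illustrative placeholders, not recommendations:

bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-url-database keycloak --db-username myuser --db-password change_me --db-pool-min-size 5 --db-pool-max-size 50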
Chapter 69. security | Chapter 69. security This chapter describes the commands under the security command. 69.1. security group create Create a new security group Usage: Table 69.1. Positional arguments Value Summary <name> New security group name Table 69.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Security group description --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tag <tag> Tag to be added to the security group (repeat option to set multiple tags) --no-tag No tags associated with the security group Table 69.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 69.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.2. security group delete Delete security group(s) Usage: Table 69.7. Positional arguments Value Summary <group> Security group(s) to delete (name or id) Table 69.8. Command arguments Value Summary -h, --help Show this help message and exit 69.3. security group list List security groups Usage: Table 69.9. Command arguments Value Summary -h, --help Show this help message and exit --project <project> List security groups according to the project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tags <tag>[,<tag>,... ] List security group which have all given tag(s) (Comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List security group which have any given tag(s) (Comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude security group which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude security group which have any given tag(s) (Comma-separated list of tags) Table 69.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 69.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 69.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.4. security group rule create Create a new security group rule Usage: Table 69.14. Positional arguments Value Summary <group> Create rule in this security group (name or id) Table 69.15. Command arguments Value Summary -h, --help Show this help message and exit --remote-ip <ip-address> Remote ip address block (may use cidr notation; default for IPv4 rule: 0.0.0.0/0, default for IPv6 rule: ::/0) --remote-group <group> Remote security group (name or id) --description <description> Set security group rule description --dst-port <port-range> Destination port, may be a single port or a starting and ending port range: 137:139. Required for IP protocols TCP and UDP. Ignored for ICMP IP protocols. --icmp-type <icmp-type> Icmp type for icmp ip protocols --icmp-code <icmp-code> Icmp code for icmp ip protocols --protocol <protocol> Ip protocol (ah, dccp, egp, esp, gre, icmp, igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp and integer representations [0-255] or any; default: any (all protocols)) --ingress Rule applies to incoming network traffic (default) --egress Rule applies to outgoing network traffic --ethertype <ethertype> Ethertype of network traffic (ipv4, ipv6; default: based on IP protocol) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 69.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 69.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.5. security group rule delete Delete security group rule(s) Usage: Table 69.20. Positional arguments Value Summary <rule> Security group rule(s) to delete (id only) Table 69.21. Command arguments Value Summary -h, --help Show this help message and exit 69.6. security group rule list List security group rules Usage: Table 69.22. Positional arguments Value Summary <group> List all rules in this security group (name or id) Table 69.23. Command arguments Value Summary -h, --help Show this help message and exit --protocol <protocol> List rules by the ip protocol (ah, dccp, egp, esp, gre, icmp, igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp and integer representations [0-255] or any; default: any (all protocols)) --ethertype <ethertype> List rules by the ethertype (ipv4 or ipv6) --ingress List rules applied to incoming network traffic --egress List rules applied to outgoing network traffic --long List additional fields in output Table 69.24. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 69.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 69.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.7. security group rule show Display security group rule details Usage: Table 69.28. Positional arguments Value Summary <rule> Security group rule to display (id only) Table 69.29. Command arguments Value Summary -h, --help Show this help message and exit Table 69.30. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 69.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.8. security group set Set security group properties Usage: Table 69.34. Positional arguments Value Summary <group> Security group to modify (name or id) Table 69.35. Command arguments Value Summary -h, --help Show this help message and exit --name <new-name> New security group name --description <description> New security group description --tag <tag> Tag to be added to the security group (repeat option to set multiple tags) --no-tag Clear tags associated with the security group. specify both --tag and --no-tag to overwrite current tags 69.9. security group show Display security group details Usage: Table 69.36. Positional arguments Value Summary <group> Security group to display (name or id) Table 69.37. Command arguments Value Summary -h, --help Show this help message and exit Table 69.38. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 69.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.40. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.41. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.10. security group unset Unset security group properties Usage: Table 69.42. Positional arguments Value Summary <group> Security group to modify (name or id) Table 69.43. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag to be removed from the security group (repeat option to remove multiple tags) --all-tag Clear all tags associated with the security group | [
"openstack security group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--project <project>] [--project-domain <project-domain>] [--tag <tag> | --no-tag] <name>",
"openstack security group delete [-h] <group> [<group> ...]",
"openstack security group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--project <project>] [--project-domain <project-domain>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack security group rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--remote-ip <ip-address> | --remote-group <group>] [--description <description>] [--dst-port <port-range>] [--icmp-type <icmp-type>] [--icmp-code <icmp-code>] [--protocol <protocol>] [--ingress | --egress] [--ethertype <ethertype>] [--project <project>] [--project-domain <project-domain>] <group>",
"openstack security group rule delete [-h] <rule> [<rule> ...]",
"openstack security group rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--protocol <protocol>] [--ethertype <ethertype>] [--ingress | --egress] [--long] [<group>]",
"openstack security group rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <rule>",
"openstack security group set [-h] [--name <new-name>] [--description <description>] [--tag <tag>] [--no-tag] <group>",
"openstack security group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <group>",
"openstack security group unset [-h] [--tag <tag> | --all-tag] <group>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/security |
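As a sketch of a typical workflow using the commands documented above; the group name web and the HTTPS port are illustrative placeholders:

# Create a group, allow inbound HTTPS, and inspect the result
openstack security group create --description "web servers" web
openstack security group rule create --protocol tcp --dst-port 443 --ingress web
openstack security group rule list web
openstack security group show web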
Chapter 97. Micrometer | Chapter 97. Micrometer Since Camel 2.22 Only producer is supported The Micrometer component allows you to collect various metrics directly from Camel routes. Supported metric types are counter , summary , and timer . Micrometer provides a simple way to measure the behavior of an application. Configurable reporting backends (via Micrometer registries) enable different integration options for collecting and visualizing statistics. The component also provides a MicrometerRoutePolicyFactory which allows you to expose route statistics using Micrometer, as well as EventNotifier implementations for counting routes and timing exchanges from their creation to their completion. 97.1. Dependencies Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-micrometer</artifactId> </dependency> At the same time, update the dependencyManagement section with: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 97.2. URI format 97.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 97.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 97.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allow you to avoid hardcoding urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 97.4. Component Options The Micrometer component supports 3 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean metricsRegistry (advanced) To use a custom-configured MetricRegistry. MeterRegistry 97.5. Endpoint Options The Micrometer endpoint is configured using URI syntax: with the following path and query parameters: 97.5.1. Path Parameters (3 parameters) Name Description Default Type metricsType (producer) Required Type of metrics. Enum values: counter summary timer Type metricsName (producer) Required Name of metrics. String tags (producer) Tags of metrics. Iterable 97.5.2. Query Parameters (6 parameters) Name Description Default Type action (producer) Action expression when using timer type. Enum values: start stop String decrement (producer) Decrement value expression when using counter type. String increment (producer) Increment value expression when using counter type. boolean metricsDescription (producer) Description of metrics. String value (producer) Value expression when using histogram type. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean 97.6. Message Headers The Micrometer component supports 7 message headers, which are listed below: Name Description Default Type CamelMetricsTimerAction (producer) Constant: HEADER_TIMER_ACTION Override timer action in URI. Enum values: start stop MicrometerTimerAction CamelMetricsHistogramValue (producer) Constant: HEADER_HISTOGRAM_VALUE Override histogram value in URI. Long CamelMetricsCounterDecrement (producer) Constant: HEADER_COUNTER_DECREMENT Override decrement value in URI. Double CamelMetricsCounterIncrement (producer) Constant: HEADER_COUNTER_INCREMENT Override increment value in URI. Double CamelMetricsName (producer) Constant: HEADER_METRIC_NAME Override name value in URI. String CamelMetricsDescription (producer) Constant: HEADER_METRIC_DESCRIPTION Override description value in URI. String CamelMetricsTags (producer) Constant: HEADER_METRIC_TAGS To augment meter tags defined as URI parameters. Iterable 97.7. Meter Registry By default the Camel Micrometer component creates a SimpleMeterRegistry instance, suitable mainly for testing. You should define a dedicated registry by providing a MeterRegistry bean. Micrometer registries primarily determine the backend monitoring system to be used. A CompositeMeterRegistry can be used to address more than one monitoring target. 
For example using Spring Java Configuration: @Configuration public static class MyConfig extends SingleRouteCamelConfiguration { @Bean @Override public RouteBuilder route() { return new RouteBuilder() { @Override public void configure() throws Exception { // define Camel routes here } }; } @Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry registry = ...; registry.add(...); // ... return registry; } } Or using CDI: @Override public void configure() { from("...") // Register the 'my-meter' meter in the MetricRegistry below .to("micrometer:meter:my-meter"); } @Produces // If multiple MetricRegistry beans // @Named(MicrometerConstants.METRICS_REGISTRY_NAME) MetricRegistry registry() { CompositeMeterRegistry registry = ...; registry.add(...); // ... return registry; } } 97.8. Default Camel Metrics Some Camel-specific metrics are available out of the box. Name Type Description camel.message.history Timer Sample of performance of each node in the route when message history is enabled camel.routes.added Gauge Number of routes added camel.routes.running Gauge Number of routes running camel.exchanges.inflight Gauge Route inflight messages camel.exchanges.total Counter Total number of processed exchanges camel.exchanges.succeeded Counter Number of successfully completed exchanges camel.exchanges.failed Counter Number of failed exchanges camel.exchanges.failures.handled Counter Number of failures handled camel.exchanges.external.redeliveries Counter Number of externally initiated redeliveries (such as from JMS broker) camel.exchange.event.notifier Gauge + Summary Metrics for message created, sent, completed, and failed events camel.route.policy Gauge + Summary Route performance metrics camel.route.policy.long.task Gauge + Summary Route long task metric 97.9. Usage of producers Each meter has a type and a name. Supported types are counter , distribution summary and timer. If no type is provided then a counter is used by default. The meter name is a string that is evaluated as a Simple expression. In addition to using the CamelMetricsName header (see below), this allows you to select the meter depending on exchange data. The optional tags URI parameter is a comma-separated string, consisting of key=value expressions. Both key and value are strings that are also evaluated as Simple expressions. E.g. the URI parameter tags=X=USD{header.Y} would assign the current value of header Y to the key X . 97.9.1. Headers The meter name defined in URI can be overridden by populating a header with name CamelMetricsName. The meter tags defined as URI parameters can be augmented by populating a header with name CamelMetricsTags. For example: from("direct:in") .setHeader(MicrometerConstants.HEADER_METRIC_NAME, constant("new.name")) .setHeader(MicrometerConstants.HEADER_METRIC_TAGS, constant(Tags.of("dynamic-key", "dynamic-value"))) .to("micrometer:counter:name.not.used?tags=key=value") .to("direct:out"); will update a counter with name new.name instead of name.not.used using the tag dynamic-key with value dynamic-value in addition to the tag key with value value . All Metrics-specific headers are removed from the message once the Micrometer endpoint finishes processing the exchange. While processing the exchange, the Micrometer endpoint catches all exceptions and writes a log entry at level warn . 97.10. Counter micrometer:counter:name[?options] 97.10.1. 
Options Name Default Description increment Double value to add to the counter decrement Double value to subtract from the counter If neither increment nor decrement is defined, the counter value is incremented by one. If both increment and decrement are defined, only the increment operation is called. // update counter simple.counter by 7 from("direct:in") .to("micrometer:counter:simple.counter?increment=7") .to("direct:out"); // increment counter simple.counter by 1 from("direct:in") .to("micrometer:counter:simple.counter") .to("direct:out"); Both increment and decrement values are evaluated as Simple expressions with a Double result, e.g. if header X contains a value that evaluates to 3.0, the simple.counter counter is decremented by 3.0: // decrement counter simple.counter by 3 from("direct:in") .to("micrometer:counter:simple.counter?decrement=USD{header.X}") .to("direct:out"); 97.10.2. Headers Like in camel-metrics, specific Message headers can be used to override increment and decrement values specified in the Micrometer endpoint URI. Name Description Expected type CamelMetricsCounterIncrement Double value to add to the counter CamelMetricsCounterDecrement Double value to subtract from the counter from("direct:in") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, constant(417.0D)) .to("micrometer:counter:simple.counter?increment=7") .to("direct:out"); from("direct:in") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, simple("USD{body.length}")) .to("micrometer:counter:body.length") .to("direct:out"); 97.11. Distribution Summary micrometer:summary:metricname[?options] 97.11.1. Options Name Default Description value Value to use in histogram If no value is set, nothing is added to the histogram and a warning is logged. // adds value 9923 to simple.histogram from("direct:in") .to("micrometer:summary:simple.histogram?value=9923") .to("direct:out"); // nothing is added to simple.histogram; warning is logged from("direct:in") .to("micrometer:summary:simple.histogram") .to("direct:out"); value is evaluated as a Simple expression with a Double result, e.g. if header X contains a value that evaluates to 3.0, this value is registered with the simple.histogram : from("direct:in") .to("micrometer:summary:simple.histogram?value=USD{header.X}") .to("direct:out"); 97.11.2. Headers Like in camel-metrics , a specific Message header can be used to override the value specified in the Micrometer endpoint URI. Name Description Expected type CamelMetricsHistogramValue Override histogram value in URI Long // adds value 992.0 to simple.histogram from("direct:in") .setHeader(MicrometerConstants.HEADER_HISTOGRAM_VALUE, constant(992.0D)) .to("micrometer:summary:simple.histogram?value=700") .to("direct:out") 97.12. Timer micrometer:timer:metricname[?options] 97.12.1. Options Name Default Description action start or stop If no action or an invalid value is provided, a warning is logged without any timer update. If action start is called on an already running timer or stop is called on an unknown timer, nothing is updated and a warning is logged. // measure time spent in route "direct:calculate" from("direct:in") .to("micrometer:timer:simple.timer?action=start") .to("direct:calculate") .to("micrometer:timer:simple.timer?action=stop"); Timer.Sample objects are stored as Exchange properties between different Metrics component calls. action is evaluated as a Simple expression returning a result of type MicrometerTimerAction . 97.12.2. 
Headers Like in camel-metrics , a specific Message header can be used to override the action value specified in the Micrometer endpoint URI. Name Description Expected type CamelMetricsTimerAction Override timer action in URI org.apache.camel.component.micrometer.MicrometerTimerAction // sets timer action using header from("direct:in") .setHeader(MicrometerConstants.HEADER_TIMER_ACTION, MicrometerTimerAction.start) .to("micrometer:timer:simple.timer") .to("direct:out"); 97.13. Using Micrometer route policy factory MicrometerRoutePolicyFactory allows you to add a RoutePolicy for each route in order to expose route utilization statistics using Micrometer. This factory can be used in Java and XML as the examples below demonstrate. Note Instead of using the MicrometerRoutePolicyFactory you can define a dedicated MicrometerRoutePolicy per route you want to instrument, in case you only want to instrument a few selected routes. From Java you just add the factory to the CamelContext as shown below: context.addRoutePolicyFactory(new MicrometerRoutePolicyFactory()); And from XML DSL you define a <bean> as follows: <!-- use camel-micrometer route policy to gather metrics for all routes --> <bean id="metricsRoutePolicyFactory" class="org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory"/> The MicrometerRoutePolicyFactory and MicrometerRoutePolicy support the following options: Name Default Description prettyPrint false Whether to use pretty print when outputting statistics in json format meterRegistry Allows you to use a shared MeterRegistry . If none is provided then Camel will create a shared instance used by this CamelContext. durationUnit TimeUnit.MILLISECONDS The unit to use for duration when dumping the statistics as json. configuration see below MicrometerRoutePolicyConfiguration.class The MicrometerRoutePolicyConfiguration supports the following options: Name Default Description additionalCounters true activates all additional counters exchangesSucceeded true activates counter for succeeded exchanges exchangesFailed true activates counter for failed exchanges exchangesTotal true activates counter for total count of exchanges externalRedeliveries true activates counter for redeliveries of exchanges failuresHandled true activates counter for handled failures longTask false activates long task timer (current processing time for micrometer) timerInitiator null Consumer<Timer.Builder> for custom Timer initialization longTaskInitiator null Consumer<LongTaskTimer.Builder> for custom LongTaskTimer initialization If JMX is enabled in the CamelContext, the MBean is registered in the type=services tree with name=MicrometerRoutePolicy . 97.14. Using Micrometer message history factory MicrometerMessageHistoryFactory allows you to use metrics to capture Message History performance statistics while routing messages. It works by using a Micrometer Timer for each node in all the routes. This factory can be used in Java and XML as the examples below demonstrate. 
From Java you just set the factory to the CamelContext as shown below: context.setMessageHistoryFactory(new MicrometerMessageHistoryFactory()); And from XML DSL you define a <bean> as follows: <!-- use camel-micrometer message history to gather metrics for all messages being routed --> <bean id="metricsMessageHistoryFactory" class="org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory"/> The following options are supported on the factory: Name Default Description prettyPrint false Whether to use pretty print when outputting statistics in json format meterRegistry Allows you to use a shared MeterRegistry . If none is provided then Camel will create a shared instance used by this CamelContext. durationUnit TimeUnit.MILLISECONDS The unit to use for duration when dumping the statistics as json. At runtime the metrics can be accessed from Java API or JMX, which allows you to gather the data as json output. From Java code you can get the service from the CamelContext as shown: MicrometerMessageHistoryService service = context.hasService(MicrometerMessageHistoryService.class); String json = service.dumpStatisticsAsJson(); If JMX is enabled in the CamelContext, the MBean is registered in the type=services tree with name=MicrometerMessageHistory . 97.15. Micrometer event notification There is a MicrometerRouteEventNotifier (counting added and running routes) and a MicrometerExchangeEventNotifier (timing exchanges from their creation to their completion). EventNotifiers can be added to the CamelContext, e.g.: camelContext.getManagementStrategy().addEventNotifier(new MicrometerExchangeEventNotifier()) At runtime the metrics can be accessed from Java API or JMX, which allows you to gather the data as json output. From Java code you can get the service from the CamelContext as shown: MicrometerEventNotifierService service = context.hasService(MicrometerEventNotifierService.class); String json = service.dumpStatisticsAsJson(); If JMX is enabled in the CamelContext, the MBean is registered in the type=services tree with name=MicrometerEventNotifier . 97.16. Instrumenting Camel Thread Pools InstrumentedThreadPoolFactory allows you to gather performance information about Camel Thread Pools by injecting an InstrumentedThreadPoolFactory which collects information from inside of Camel. See more details at Advanced configuration of CamelContext using Spring. 97.17. Exposing Micrometer statistics in JMX Micrometer uses MeterRegistry implementations in order to publish statistics. While in production scenarios it is advisable to select a dedicated backend like Prometheus or Graphite, it may be sufficient for test or local deployments to publish statistics to JMX. In order to achieve this, add the following dependency: <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-jmx</artifactId> <version>USD{micrometer-version}</version> </dependency> and add a JmxMeterRegistry instance: @Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry meterRegistry = new CompositeMeterRegistry(); meterRegistry.add(...); meterRegistry.add(new JmxMeterRegistry( CamelJmxConfig.DEFAULT, Clock.SYSTEM, HierarchicalNameMapper.DEFAULT)); return meterRegistry; } } The HierarchicalNameMapper strategy determines how meter name and tags are assembled into an MBean name. 97.18. 
Using Camel Micrometer with Spring Boot When you use camel-micrometer-starter with Spring Boot, Spring Boot auto-configuration automatically enables metrics capture if an io.micrometer.core.instrument.MeterRegistry is available. For example, to capture data with Prometheus, you can add the following dependency: <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency> See the following table for options to specify what metrics to capture, or to turn it off. 97.19. Spring Boot Auto Configuration Compared to the plain Camel Micrometer component, the Micrometer component on Spring Boot provides 10 more options, which are listed below: Name Description Default Type camel.component.micrometer.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean camel.component.micrometer.enabled Whether to enable auto configuration of the micrometer component. This is enabled by default. Boolean camel.component.micrometer.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean camel.component.micrometer.metrics-registry To use a custom-configured MetricRegistry. The option is an io.micrometer.core.instrument.MeterRegistry type. MeterRegistry camel.metrics.enable-exchange-event-notifier Set whether to enable the MicrometerExchangeEventNotifier for capturing metrics on exchange processing times. True Boolean camel.metrics.enable-message-history Set whether to enable the MicrometerMessageHistoryFactory for capturing metrics on individual route node processing times. Depending on the number of configured route nodes, there is the potential to create a large volume of metrics. Therefore, this option is disabled by default. False Boolean camel.metrics.enable-route-event-notifier Set whether to enable the MicrometerRouteEventNotifier for capturing metrics on the total number of routes and total number of routes running. True Boolean camel.metrics.enable-route-policy Set whether to enable the MicrometerRoutePolicyFactory for capturing metrics on route processing times. True Boolean camel.metrics.uri-tag-dynamic Whether to use static or dynamic values for URI tags in captured metrics. When using dynamic tags, a REST service with base URL: /users/{id} will capture metrics with a uri tag containing the actual dynamic value, such as /users/123. However, this can lead to many tags as the URI is dynamic, so use this with care. False Boolean camel.metrics.uri-tag-enabled Whether HTTP uri tags should be enabled or not in captured metrics. If disabled, the uri tag is likely not able to be resolved and will be marked as UNKNOWN. True Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-micrometer</artifactId> </dependency>",
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"micrometer:[ counter | summary | timer ]:metricname[?options]",
"micrometer:metricsType:metricsName",
"@Configuration public static class MyConfig extends SingleRouteCamelConfiguration { @Bean @Override public RouteBuilder route() { return new RouteBuilder() { @Override public void configure() throws Exception { // define Camel routes here } }; } @Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry registry = ...; registry.add(...); // return registry; } }",
"@Override public void configure() { from(\"...\") // Register the 'my-meter' meter in the MetricRegistry below .to(\"micrometer:meter:my-meter\"); } @Produces // If multiple MetricRegistry beans // @Named(MicrometerConstants.METRICS_REGISTRY_NAME) MetricRegistry registry() { CompositeMeterRegistry registry = ...; registry.add(...); // return registry; } }",
"from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_METRIC_NAME, constant(\"new.name\")) .setHeader(MicrometerConstants.HEADER_METRIC_TAGS, constant(Tags.of(\"dynamic-key\", \"dynamic-value\"))) .to(\"micrometer:counter:name.not.used?tags=key=value\") .to(\"direct:out\");",
"micrometer:counter:name[?options]",
"// update counter simple.counter by 7 from(\"direct:in\") .to(\"micrometer:counter:simple.counter?increment=7\") .to(\"direct:out\"); // increment counter simple.counter by 1 from(\"direct:in\") .to(\"micrometer:counter:simple.counter\") .to(\"direct:out\");",
"// decrement counter simple.counter by 3 from(\"direct:in\") .to(\"micrometer:counter:simple.counter?decrement=USD{header.X}\") .to(\"direct:out\");",
"from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, constant(417.0D)) .to(\"micrometer:counter:simple.counter?increment=7\") .to(\"direct:out\");",
"from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_COUNTER_INCREMENT, simple(\"USD{body.length}\")) .to(\"micrometer:counter:body.length\") .to(\"direct:out\");",
"micrometer:summary:metricname[?options]",
"// adds value 9923 to simple.histogram from(\"direct:in\") .to(\"micrometer:summary:simple.histogram?value=9923\") .to(\"direct:out\"); // nothing is added to simple.histogram; warning is logged from(\"direct:in\") .to(\"micrometer:summary:simple.histogram\") .to(\"direct:out\");",
"from(\"direct:in\") .to(\"micrometer:summary:simple.histogram?value=USD{header.X}\") .to(\"direct:out\");",
"// adds value 992.0 to simple.histogram from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_HISTOGRAM_VALUE, constant(992.0D)) .to(\"micrometer:summary:simple.histogram?value=700\") .to(\"direct:out\")",
"micrometer:timer:metricname[?options]",
"// measure time spent in route \"direct:calculate\" from(\"direct:in\") .to(\"micrometer:timer:simple.timer?action=start\") .to(\"direct:calculate\") .to(\"micrometer:timer:simple.timer?action=stop\");",
"// sets timer action using header from(\"direct:in\") .setHeader(MicrometerConstants.HEADER_TIMER_ACTION, MicrometerTimerAction.start) .to(\"micrometer:timer:simple.timer\") .to(\"direct:out\");",
"context.addRoutePolicyFactory(new MicrometerRoutePolicyFactory());",
"<!-- use camel-micrometer route policy to gather metrics for all routes --> <bean id=\"metricsRoutePolicyFactory\" class=\"org.apache.camel.component.micrometer.routepolicy.MicrometerRoutePolicyFactory\"/>",
"context.setMessageHistoryFactory(new MicrometerMessageHistoryFactory());",
"<!-- use camel-micrometer message history to gather metrics for all messages being routed --> <bean id=\"metricsMessageHistoryFactory\" class=\"org.apache.camel.component.micrometer.messagehistory.MicrometerMessageHistoryFactory\"/>",
"MicrometerMessageHistoryService service = context.hasService(MicrometerMessageHistoryService.class); String json = service.dumpStatisticsAsJson();",
"camelContext.getManagementStrategy().addEventNotifier(new MicrometerExchangeEventNotifier())",
"MicrometerEventNotifierService service = context.hasService(MicrometerEventNotifierService.class); String json = service.dumpStatisticsAsJson();",
"<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-jmx</artifactId> <version>USD{micrometer-version}</version> </dependency>",
"@Bean(name = MicrometerConstants.METRICS_REGISTRY_NAME) public MeterRegistry getMeterRegistry() { CompositeMeterRegistry meterRegistry = new CompositeMeterRegistry(); meterRegistry.add(...); meterRegistry.add(new JmxMeterRegistry( CamelJmxConfig.DEFAULT, Clock.SYSTEM, HierarchicalNameMapper.DEFAULT)); return meterRegistry; } }",
"<dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-micrometer-component-starter |
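As a minimal sketch of the Spring Boot auto-configuration options from the table above, an application.properties file might toggle them as follows; the values shown are illustrative, not recommendations:

# Enable the component and route policy metrics (documented defaults: true)
camel.component.micrometer.enabled=true
camel.metrics.enable-route-policy=true
# Per-node message history metrics can create a large volume of meters
camel.metrics.enable-message-history=false
# Keep URI tags static to avoid high tag cardinality
camel.metrics.uri-tag-dynamic=false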
Chapter 3. The pcs Command Line Interface | Chapter 3. The pcs Command Line Interface The pcs command line interface controls and configures corosync and Pacemaker by providing an interface to the corosync.conf and cib.xml files. The general format of the pcs command is as follows. 3.1. The pcs Commands The pcs commands are as follows. cluster Configure cluster options and nodes. For information on the pcs cluster command, see Chapter 4, Cluster Creation and Administration . resource Create and manage cluster resources. For information on the pcs resource command, see Chapter 6, Configuring Cluster Resources , Chapter 8, Managing Cluster Resources , and Chapter 9, Advanced Configuration . stonith Configure fence devices for use with Pacemaker. For information on the pcs stonith command, see Chapter 5, Fencing: Configuring STONITH . constraint Manage resource constraints. For information on the pcs constraint command, see Chapter 7, Resource Constraints . property Set Pacemaker properties. For information on setting properties with the pcs property command, see Chapter 12, Pacemaker Cluster Properties . status View current cluster and resource status. For information on the pcs status command, see Section 3.5, "Displaying Status" . config Display complete cluster configuration in user-readable form. For information on the pcs config command, see Section 3.6, "Displaying the Full Cluster Configuration" . | [
"pcs [-f file ] [-h] [ commands ]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-pcscommand-haar |
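A few representative invocations following the general format above; stonith-enabled is a standard Pacemaker property and appears here purely as an illustration:

pcs status                              # view current cluster and resource status
pcs config                              # display the full cluster configuration
pcs property set stonith-enabled=true   # set a Pacemaker cluster property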
Chapter 7. LoadBalancer | Chapter 7. LoadBalancer The following table lists all the packages in the Load Balancer add-on. For more information about core packages, see the Scope of Coverage Details document. Package Core Package? License haproxy Yes GPLv2+ ipvsadm Yes GPLv2+ keepalived Yes GPLv2+ piranha Yes GPLv2+ | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/package_manifest/chap-loadbalancer |
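Assuming the Load Balancer add-on repository is enabled on the system, the packages listed above can be installed with yum, for example:

yum install haproxy keepalived ipvsadm piranha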
Chapter 11. Monitoring project and application metrics using the Developer perspective | Chapter 11. Monitoring project and application metrics using the Developer perspective The Observe view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 11.1. Prerequisites You have created and deployed applications on OpenShift Container Platform . You have logged in to the web console and have switched to the Developer perspective . 11.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure Go to Observe to see the Dashboard , Metrics , Alerts , and Events for your project. Optional: Use the Dashboard tab to see graphs depicting the following application metrics: CPU usage Memory usage Bandwidth consumption Network-related information such as the rate of transmitted and received packets and the rate of dropped packets. In the Dashboard tab, you can access the Kubernetes compute resources dashboards. Note In the Dashboard list, the Kubernetes / Compute Resources / Namespace (Pods) dashboard is selected by default. Use the following options to see further details: Select a dashboard from the Dashboard list to see the filtered metrics. All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Select an option from the Time Range list to determine the time frame for the data being captured. Set a custom time range by selecting Custom time range from the Time Range list. You can input or select the From and To dates and times. Click Save to save the custom time range. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click Inspect located in the upper-right corner of every graph to see any particular graph details. The graph details appear in the Metrics tab. Optional: Use the Metrics tab to query for the required project metric. Figure 11.1. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optional: In the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Optional: Use the Alerts tab to do the following tasks: See the rules that trigger alerts for the applications in your project. Identify the alerts firing in the project. Silence such alerts if required. Figure 11.2. Monitoring alerts Use the following options to see further details: Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. 
In the Alerts Details page, you can click View Metrics to see the metrics for the alert. Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Optional: Use the Events tab to see the events for your project. Figure 11.3. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 11.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Observe tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 11.4. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 11.4. Image vulnerabilities breakdown In the Developer perspective, the project dashboard shows the Image Vulnerabilities link in the Status section. Using this link, you can view the Image Vulnerabilities breakdown window, which includes details regarding vulnerable container images and fixable container images. The icon color indicates severity: Red: High priority. Fix immediately. Orange: Medium priority. Can be fixed after high-priority vulnerabilities. Yellow: Low priority. Can be fixed after high and medium-priority vulnerabilities. Based on the severity level, you can prioritize vulnerabilities and fix them in an organized manner. Figure 11.5. Viewing image vulnerabilities 11.5. Monitoring your application and image vulnerabilities metrics After you create applications in your project and deploy them, use the Developer perspective in the web console to see the metrics for your application dependency vulnerabilities across your cluster. The metrics help you to analyze the following image vulnerabilities in detail: Total count of vulnerable images in a selected project Severity-based counts of all vulnerable images in a selected project Drilldown into severity to obtain the details, such as count of vulnerabilities, count of fixable vulnerabilities, and number of affected pods for each vulnerable image Prerequisites You have installed the Red Hat Quay Container Security operator from the Operator Hub. Note The Red Hat Quay Container Security operator detects vulnerabilities by scanning the images that are in the quay registry. 
Procedure For a general overview of the image vulnerabilities, on the navigation panel of the Developer perspective, click Project to see the project dashboard. Click Image Vulnerabilities in the Status section. The window that opens displays details such as Vulnerable Container Images and Fixable Container Images . For a detailed vulnerabilities overview, click the Vulnerabilities tab on the project dashboard. To get more detail about an image, click its name. View the default graph with all types of vulnerabilities in the Details tab. Optional: Click the toggle button to view a specific type of vulnerability. For example, click App dependency to see vulnerabilities specific to application dependency. Optional: You can filter the list of vulnerabilities based on their Severity and Type or sort them by Severity , Package , Type , Source , Current Version , and Fixed in Version . Click a Vulnerability to get its associated details: Base image vulnerabilities display information from a Red Hat Security Advisory (RHSA). App dependency vulnerabilities display information from the Snyk security application. 11.6. Additional resources Monitoring overview | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/building_applications/odc-monitoring-project-and-application-metrics-using-developer-perspective |
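As an illustration of the Custom Query option described above, a PromQL expression such as the following sums per-pod CPU usage in a namespace; my-namespace is a placeholder, and the metric names actually available depend on your cluster's monitoring configuration:

sum(rate(container_cpu_usage_seconds_total{namespace="my-namespace"}[5m])) by (pod)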
7.6. Running Red Hat JBoss Data Grid | 7.6. Running Red Hat JBoss Data Grid JBoss Data Grid can be run in one of three ways: Use the following command to run JBoss Data Grid using the configuration defined in the standalone.xml file (located at USDJDG_HOME/standalone/configuration ): Use the following command with an appended -c followed by the configuration file name to run JBoss Data Grid with a non-default configuration file: Use the following command to run JBoss Data Grid with a default clustered configuration: | [
"USDJDG_HOME/bin/standalone.sh",
"USDJDG_HOME/bin/standalone.sh -c clustered.xml",
"USDJDG_HOME/bin/clustered.sh"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/running_red_hat_jboss_data_grid |
Part VI. Known Issues | Part VI. Known Issues This part documents known problems in Red Hat Enterprise Linux 7.4. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/known-issues |
23.8. Memory Allocation | 23.8. Memory Allocation In cases where the guest virtual machine crashes, the optional attribute dumpCore can be used to control whether the guest virtual machine's memory should be included in the generated core dump ( dumpCore='on' ) or not included ( dumpCore='off' ). Note that the default setting is on , so unless the parameter is set to off , the guest virtual machine memory will be included in the core dumpfile. The <maxMemory> element determines the maximum run-time memory allocation of the guest. The slots attribute specifies the number of slots available for adding memory to the guest. The <memory> element specifies the maximum allocation of memory for the guest at boot time. This can also be set using the NUMA cell size configuration, and can be increased by hot-plugging of memory to the limit specified by maxMemory . The <currentMemory> element determines the actual memory allocation for a guest virtual machine. This value can be less than the maximum allocation (set by <memory> ) to allow for the guest virtual machine memory to balloon as needed. If omitted, this defaults to the same value as the <memory> element. The unit attribute behaves the same as for memory. <domain> <maxMemory slots='16' unit='KiB'>1524288</maxMemory> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated core dumpfile --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> ... </domain> Figure 23.10. Memory unit | [
"<domain> <maxMemory slots='16' unit='KiB'>1524288</maxMemory> <memory unit='KiB' dumpCore='off'>524288</memory> <!-- changes the memory unit to KiB and does not allow the guest virtual machine's memory to be included in the generated core dumpfile --> <currentMemory unit='KiB'>524288</currentMemory> <!-- makes the current memory unit 524288 KiB --> </domain>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-memory_allocation |
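At runtime, the actual allocation described by <currentMemory> can be adjusted with virsh; a minimal sketch, assuming a running guest named guest1 (a placeholder):

# Balloon the guest to 524288 KiB without exceeding the <memory> maximum
virsh setmem guest1 524288 --live
# Confirm the new values in the live domain XML
virsh dumpxml guest1 | grep -E '<memory|<currentMemory'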
Chapter 7. Troubleshooting the Bare Metal Provisioning service | Chapter 7. Troubleshooting the Bare Metal Provisioning service Diagnose issues in an environment that includes the Bare Metal Provisioning service (ironic). 7.1. PXE boot errors Use the following troubleshooting procedures to assess and remedy issues you might encounter with PXE boot. Permission Denied errors If the console of your bare metal node returns a Permission Denied error, ensure that you have applied the appropriate SELinux context to the /httpboot and /tftpboot directories: Boot process freezes at /pxelinux.cfg/XX-XX-XX-XX-XX-XX On the console of your node, if it looks like you receive an IP address but then the process stops, you might be using the wrong PXE boot template in your ironic.conf file. The default template is pxe_config.template , so it is easy to overlook the extra i and inadvertently enter ipxe_config.template instead. 7.2. Login errors after the bare metal node boots Failure to log in to the node when you use the root password that you set during configuration indicates that you are not booted into the deployed image. You might be logged in to the deploy-kernel/deploy-ramdisk image and the system has not yet loaded the correct image. To fix this issue, verify the PXE boot configuration file in /httpboot/pxelinux.cfg/MAC_ADDRESS on the Compute or Bare Metal Provisioning service node, and ensure that all the IP addresses listed in this file correspond to IP addresses on the Bare Metal network (a quick cross-check sketch follows the command listing below). Note The only network that the Bare Metal Provisioning service node uses is the Bare Metal network. If one of the endpoints is not on the network, the endpoint cannot reach the Bare Metal Provisioning service node as a part of the boot process. For example, the kernel line in your file is as follows: Value in the above example kernel line Corresponding information http://192.168.200.2:8088 Parameter http_url in /etc/ironic/ironic.conf file. This IP address must be on the Bare Metal network. 5a6cdbe3-2c90-4a90-b3c6-85b449b30512 UUID of the baremetal node in ironic node-list . deploy_kernel This is the deploy kernel image in the Image service that is copied down as /httpboot/<NODE_UUID>/deploy_kernel . http://192.168.200.2:6385 Parameter api_url in /etc/ironic/ironic.conf file. This IP address must be on the Bare Metal network. ipmi The IPMI Driver in use by the Bare Metal Provisioning service for this node. deploy_ramdisk This is the deploy ramdisk image in the Image service that is copied down as /httpboot/<NODE_UUID>/deploy_ramdisk . If a value does not correspond between the /httpboot/pxelinux.cfg/MAC_ADDRESS and the ironic.conf file: Update the value in the ironic.conf file Restart the Bare Metal Provisioning service Re-deploy the Bare Metal instance 7.3. Boot-to-disk errors on deployed nodes With certain hardware, you might experience a problem with deployed nodes where the nodes cannot boot from disk during successive boot operations as part of a deployment. This usually happens because the BMC does not honor the persistent boot settings that director requests on the nodes. Instead, the nodes boot from a PXE target. In this case, you must update the boot order in the BIOS of the nodes. Set the HDD to be the first boot device, and then PXE as a later option, so that the nodes boot from disk by default, but can boot from the network during introspection or deployment as necessary. Note This error mostly applies to nodes that use LegacyBIOS firmware. 7.4.
The Bare Metal Provisioning service does not receive the correct host name If the Bare Metal Provisioning service does not receive the correct host name, it means that cloud-init is failing. To fix this, connect the Bare Metal subnet to a router in the OpenStack Networking service. This configuration routes requests to the meta-data agent correctly. 7.5. Invalid OpenStack Identity service credentials when executing Bare Metal Provisioning service commands If you cannot authenticate to the Identity service, check the identity_uri parameter in the ironic.conf file and ensure that you remove the /v2.0 from the keystone AdminURL. For example, set the identity_uri to http://IP:PORT . 7.6. Hardware enrolment Incorrect node registration details can cause issues with enrolled hardware. Ensure that you enter property names and values correctly. When you input property names incorrectly, the system adds the properties to the node details but ignores them. Use the openstack baremetal node set command to update node details. For example, update the amount of memory that the node is registered to use to 2 GB: 7.7. Troubleshooting iDRAC issues Redfish management interface fails to set boot device When you use the idrac-redfish management interface with certain iDRAC firmware versions and attempt to set the boot device on a bare metal server with UEFI boot, iDRAC returns the following error: If you encounter this issue, set the force_persistent_boot_device parameter in the driver-info on the node to Never : Timeout when powering off Some servers can be too slow when powering off, and time out. The default retry count is 6 , which results in a 30 second timeout. To increase the timeout duration to 90 seconds, set the ironic::agent::rpc_response_timeout value to 18 in the undercloud hieradata overrides file and re-run the openstack undercloud install command: Vendor passthrough timeout When iDRAC is not available to execute vendor passthrough commands, these commands take too long and time out: To increase the timeout duration for messaging, increase the value of the ironic::default::rpc_response_timeout parameter in the undercloud hieradata overrides file and re-run the openstack undercloud install command: 7.8. Configuring the server console Console output from overcloud nodes is not always sent to the server console. If you want to view this output in the server console, you must configure the overcloud to use the correct console for your hardware. Use one of the following methods to perform this configuration: Modify the KernelArgs heat parameter for each overcloud role. Customize the overcloud-hardened-uefi-full.qcow2 image that director uses to provision the overcloud nodes. Prerequisites A successful undercloud installation. For more information, see the Installing and managing Red Hat OpenStack Platform with director guide. Overcloud nodes ready for deployment. Modifying KernelArgs with heat during deployment Log in to the undercloud host as the stack user. Source the stackrc credentials file: Create an environment file overcloud-console.yaml with the following content: Replace <role> with the name of the overcloud role that you want to configure, and replace <console-name> with the ID of the console that you want to use. For example, use the following snippet to configure all overcloud nodes in the default roles to use tty0 : Include the overcloud-console.yaml file in your deployment command with the -e option.
Modifying the overcloud-hardened-uefi-full.qcow2 image Log in to the undercloud host as the stack user. Source the stackrc credentials file: Modify the kernel arguments in the overcloud-hardened-uefi-full.qcow2 image to set the correct console for your hardware. For example, set the console to tty1 : Import the image into director: Deploy the overcloud. Verification Log in to an overcloud node from the undercloud: Replace <IP-address> with the IP address of an overcloud node. Inspect the contents of the /proc/cmdline file and ensure that the console= parameter is set to the value of the console that you want to use: | [
"semanage fcontext -a -t httpd_sys_content_t \"/httpboot(/.*)?\" restorecon -r -v /httpboot semanage fcontext -a -t tftpdir_t \"/tftpboot(/.*)?\" restorecon -r -v /tftpboot",
"grep ^pxe_config_template ironic.conf pxe_config_template=USDpybasedir/drivers/modules/ipxe_config.template",
"kernel http://192.168.200.2:8088/5a6cdbe3-2c90-4a90-b3c6-85b449b30512/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn.2008-10.org.openstack:5a6cdbe3-2c90-4a90-b3c6-85b449b30512 deployment_id= 5a6cdbe3-2c90-4a90-b3c6-85b449b30512 deployment_key=VWDYDVVEFCQJNOSTO9R67HKUXUGP77CK ironic_api_url= http://192.168.200.2:6385 troubleshoot=0 text nofb nomodeset vga=normal boot_option=netboot ip=USD{ip}:USD{next-server}:USD{gateway}:USD{netmask} BOOTIF=USD{mac} ipa-api-url= http://192.168.200.2:6385 ipa-driver-name= ipmi boot_mode=bios initrd= deploy_ramdisk coreos.configdrive=0 || goto deploy",
"openstack baremetal node set --property memory_mb=2048 NODE_UUID",
"Unable to Process the request because the value entered for the parameter Continuous is not supported by the implementation.",
"openstack baremetal node set --driver-info force_persistent_boot_device=Never USD{node_uuid}",
"ironic::agent::rpc_response_timeout: 18",
"openstack baremetal node passthru call --http-method GET aed58dca-1b25-409a-a32f-3a817d59e1e0 list_unfinished_jobs Timed out waiting for a reply to message ID 547ce7995342418c99ef1ea4a0054572 (HTTP 500)",
"ironic::default::rpc_response_timeout: 600",
"source stackrc",
"parameter_defaults: <role>Parameters: KernelArgs: \"console=<console-name>\"",
"parameter_defaults: ControllerParameters: KernelArgs: \"console=tty0\" ComputeParameters: KernelArgs: \"console=tty0\" BlockStorageParameters: KernelArgs: \"console=tty0\" ObjectStorageParameters: KernelArgs: \"console=tty0\" CephStorageParameters: KernelArgs: \"console=tty0\"",
"source stackrc",
"virt-customize --selinux-relabel -a overcloud-hardened-uefi-full.qcow2 --run-command 'grubby --update-kernel=ALL --args=\"console=tty1\"'",
"openstack overcloud image upload --image-path overcloud-hardened-uefi-full.qcow2",
"ssh tripleo-admin@<IP-address>",
"[tripleo-admin@controller-0 ~]USD cat /proc/cmdline BOOT_IMAGE=(hd0,msdos2)/boot/vmlinuz-4.18.0-193.29.1.el8_2.x86_64 root=UUID=0ec3dea5-f293-4729-b676-5d38a611ce81 ro console=tty0 console=ttyS0,115200n8 no_timer_check crashkernel=auto rhgb quiet"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_bare_metal_provisioning_service/troubleshooting-the-bare-metal-provisioning-service |
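As referenced above, a minimal cross-check sketch for the login errors in this chapter, assuming the default file locations (replace <MAC_ADDRESS> with an actual file name; this sketch is illustrative and not part of the source procedure):

grep -E 'http_url|api_url' /etc/ironic/ironic.conf # endpoints that must be on the Bare Metal network
grep kernel /httpboot/pxelinux.cfg/<MAC_ADDRESS> # IP addresses handed to the node at PXE boot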
8.2. Changed File Locations | 8.2. Changed File Locations In Red Hat Enterprise Linux 7, the following changes have been made to the location of the input method and font configuration files, and directories: The .xinputrc file has been moved from the user's home directory to the ~/.config/imsettings/ directory. The .imsettings.log file has been moved from the user's home directory and can now be found in ~/.cache/imsettings/log . The ~/.fonts.conf file has been deprecated. Users are encouraged to move the file to the ~/.config/fontconfig/ directory. The ~/.fonts.conf.d directory has been deprecated. Users are encouraged to move the directory to the ~/.config/fontconfig/ directory. All disabled fontconfig configuration files in the /etc/fonts/conf.avail/ directory have been moved to the /usr/share/fontconfig/conf.avail/ directory. If you have any local symbolic links pointing to the old location, remember to update them. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/input-methods-changed-file-locations |
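The per-user migration described above amounts to a directory creation and two moves. A minimal sketch, assuming the deprecated files exist in the home directory (fontconfig reads ~/.config/fontconfig/fonts.conf and ~/.config/fontconfig/conf.d in their place):

mkdir -p ~/.config/fontconfig
mv ~/.fonts.conf ~/.config/fontconfig/fonts.conf
mv ~/.fonts.conf.d ~/.config/fontconfig/conf.d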
Operators | Operators OpenShift Container Platform 4.15 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF",
"bundle.core.rukpak.io/combo-tag-ref created",
"oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'",
"Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable",
"tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.14",
"registry.redhat.io/redhat/redhat-operator-index:v4.15",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.28 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.28",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>",
"apiVersion: v1 kind: Namespace metadata: name: team1-operator",
"oc create -f team1-operator.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1",
"oc create -f team1-operatorgroup.yaml",
"apiVersion: v1 kind: Namespace metadata: name: global-operators",
"oc create -f global-operators.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators",
"oc create -f global-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc get csvs -n openshift",
"oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF",
"oc get events",
"LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15 1",
". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <catalog_dir>",
"echo USD?",
"0",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml",
"--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---",
"opm validate <catalog_dir>",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/<index_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.15 --tag mirror.example.com/abc/abc-redhat-operator-index:4.15.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.15",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.15 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.15 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.15 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.15",
"opm migrate <registry_image> <fbc_directory>",
"opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15",
"opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest",
"apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" grpcPodConfig: securityContextConfig: <security_mode> 2 image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.15 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge",
"grpcPodConfig: nodeSelector: custom_label: <label>",
"grpcPodConfig: priorityClassName: <priority_class>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical",
"grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" name: cluster spec: featureSet: TechPreviewNoUpgrade 1",
"apiVersion: platform.openshift.io/v1alpha1 kind: PlatformOperator metadata: name: service-mesh-po spec: package: name: servicemeshoperator",
"oc get platformoperator service-mesh-po -o yaml",
"status: activeBundleDeployment: name: service-mesh-po conditions: - lastTransitionTime: \"2022-10-24T17:24:40Z\" message: Successfully applied the service-mesh-po BundleDeployment resource reason: InstallSuccessful status: \"True\" 1 type: Installed",
"oc get clusteroperator platform-operators-aggregated -o yaml",
"status: conditions: - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"False\" type: Progressing - lastTransitionTime: \"2022-10-24T17:43:26Z\" status: \"False\" type: Degraded - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"True\" type: Available",
"apiVersion: platform.openshift.io/v1alpha1 kind: PlatformOperator metadata: name: service-mesh-po spec: package: name: servicemeshoperator",
"oc apply -f service-mesh-po.yaml",
"error: resource mapping not found for name: \"service-mesh-po\" namespace: \"\" from \"service-mesh-po.yaml\": no matches for kind \"PlatformOperator\" in version \"platform.openshift.io/v1alpha1\" ensure CRDs are installed first",
"oc get platformoperator service-mesh-po -o yaml",
"status: activeBundleDeployment: name: service-mesh-po conditions: - lastTransitionTime: \"2022-10-24T17:24:40Z\" message: Successfully applied the service-mesh-po BundleDeployment resource reason: InstallSuccessful status: \"True\" 1 type: Installed",
"oc get clusteroperator platform-operators-aggregated -o yaml",
"status: conditions: - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"False\" type: Progressing - lastTransitionTime: \"2022-10-24T17:43:26Z\" status: \"False\" type: Degraded - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"True\" type: Available",
"oc get platformoperator",
"oc delete platformoperator quay-operator",
"platformoperator.platform.openshift.io \"quay-operator\" deleted",
"oc get ns quay-operator-system",
"Error from server (NotFound): namespaces \"quay-operator-system\" not found",
"oc get co platform-operators-aggregated",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE platform-operators-aggregated 4.15.0-0 True False False 70s",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0 1",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"FROM quay.io/operator-framework/ansible-operator:v1.31.0",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0 1",
"collections: - - name: community.kubernetes 1 - version: \"2.0.1\" - name: operator_sdk.util - version: \"0.4.0\" + version: \"0.5.0\" 2 - name: kubernetes.core version: \"2.4.0\" - name: cloud.common",
"--- dependency: name: galaxy driver: name: delegated - lint: | - set -e - yamllint -d \"{extends: relaxed, rules: {line-length: {max: 120}}}\" . platforms: - name: cluster groups: - k8s provisioner: name: ansible - lint: | - set -e ansible-lint inventory: group_vars: all: namespace: USD{TEST_OPERATOR_NAMESPACE:-osdk-test} host_vars: localhost: ansible_python_interpreter: '{{ ansible_playbook_python }}' config_dir: USD{MOLECULE_PROJECT_DIRECTORY}/config samples_dir: USD{MOLECULE_PROJECT_DIRECTORY}/config/samples operator_image: USD{OPERATOR_IMAGE:-\"\"} operator_pull_policy: USD{OPERATOR_PULL_POLICY:-\"Always\"} kustomize: USD{KUSTOMIZE_PATH:-kustomize} env: K8S_AUTH_KUBECONFIG: USD{KUBECONFIG:-\"~/.kube/config\"} verifier: name: ansible - lint: | - set -e - ansible-lint",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"FROM quay.io/operator-framework/helm-operator:v1.31.0 1",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <operator_name>-admin subjects: - kind: ServiceAccount name: <operator_name> namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\" rules: 1 - apiGroups: - \"\" resources: - secrets verbs: - watch",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"mkdir -p USDHOME/github.com/example/memcached-operator",
"cd USDHOME/github.com/example/memcached-operator",
"operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help",
"Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch",
"// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }",
"operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v3",
"Create Resource [y/n] y Create Controller [y/n] y",
"// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )",
"// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch",
"make install run",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc project <project_name>-system",
"apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m",
"apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2",
"oc apply -f config/samples/cache_v1_memcachedbackup.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"oc delete -f config/samples/cache_v1_memcachedbackup.yaml",
"make undeploy",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0 1",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"",
"operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4",
"tree",
". ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files",
"public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }",
"import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }",
"@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}",
"mvn clean install",
"cat target/kubernetes/memcacheds.cache.example.com-v1.yaml",
"Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1",
"<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>",
"package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", 
\"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }",
"Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();",
"if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }",
"int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();",
"if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }",
"List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());",
"if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }",
"private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }",
"private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }",
"mvn clean install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"oc apply -f rbac.yaml",
"java -jar target/quarkus-app/quarkus-run.jar",
"kubectl apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f rbac.yaml",
"oc get all -n default",
"NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s",
"oc apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply credentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-azure: \"true\"",
"// Get ENV var clientID := os.Getenv(\"CLIENTID\") tenantID := os.Getenv(\"TENANTID\") subscriptionID := os.Getenv(\"SUBSCRIPTIONID\") azureFederatedTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply credentialsRequest on install credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID credReqTemplate.Spec.AzureProviderSpec.AzureRegion = \"centralus\" credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID credReqTemplate.CloudTokenPath = azureFederatedTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.31.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.31.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.31.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.31.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.31.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.31.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 },",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"oc -n [namespace] edit cm hw-event-proxy-operator-manager-config",
"apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> spec: packageName: <package_name> channel: <channel_name> version: <version_number>",
"oc get operator.operators.operatorframework.io",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.11.1 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: >1.11.1 1",
"oc apply -f <extension_name>.yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF",
"bundle.core.rukpak.io/combo-tag-ref created",
"oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'",
"Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable",
"tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h",
"oc apply -f <catalog_name>.yaml 1",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.15 pullSecret: <pull_secret_name> pollInterval: 24h",
"oc apply -f <catalog_name>.yaml 1",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<file_path>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd",
"oc create secret generic redhat-cred --from-file=.dockercfg=/home/<username>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<file_path>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd",
"oc create secret generic redhat-cred --from-file=.dockerconfigjson=/home/<username>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<username> --docker-password=<password> --docker-email=<email> --namespace=openshift-catalogd",
"oc create secret docker-registry redhat-cred --docker-server=registry.redhat.io --docker-username=username --docker-password=password [email protected] --namespace=openshift-catalogd",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3",
"oc apply -f redhat-operators.yaml",
"catalog.catalogd.operatorframework.io/redhat-operators created",
"oc get catalog",
"NAME AGE redhat-operators 20s",
"oc describe catalog",
"Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: Catalog Metadata: Creation Timestamp: 2024-01-10T16:18:38Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 57057 UID: 128db204-49b3-45ee-bfea-a2e6fc8e34ea Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 Type: image Status: 1 Conditions: Last Transition Time: 2024-01-10T16:18:55Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: http://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-01-10T16:18:51Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.15 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:7b536ae19b8e9f74bb521c4a61e5818e036ac1865a932f2157c6c9a766b2eea5 4 Type: image Events: <none>",
"oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80",
"curl -L http://localhost:8080/catalogs/<catalog_name>/all.json -C - -o /<path>/<catalog_name>.json",
"curl -L http://localhost:8080/catalogs/redhat-operators/all.json -C - -o /home/username/catalogs/rhoc.json",
"jq -s '.[] | select(.schema == \"olm.package\") | .name' /<path>/<filename>.json",
"jq -s '.[] | select(.schema == \"olm.package\") | .name' /home/username/catalogs/rhoc.json",
"NAME AGE \"3scale-operator\" \"advanced-cluster-management\" \"amq-broker-rhel8\" \"amq-online\" \"amq-streams\" \"amq7-interconnect-operator\" \"ansible-automation-platform-operator\" \"ansible-cloud-addons-operator\" \"apicast-operator\" \"aws-efs-csi-driver-operator\" \"aws-load-balancer-operator\" \"bamoe-businessautomation-operator\" \"bamoe-kogito-operator\" \"bare-metal-event-relay\" \"businessautomation-operator\"",
"jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' /<path>/<catalog_name>.json",
"{\"package\":\"3scale-operator\",\"version\":\"0.10.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.10.5\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.1-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.2-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.3-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.5-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.6-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.7-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.8-mas\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-3\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-4\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-2\"}",
"jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"openshift-pipelines-operator-rh\")' /home/username/rhoc.json",
"{ \"defaultChannel\": \"stable\", \"icon\": { \"base64data\": \"PHN2ZyB4bWxu...\" \"mediatype\": \"image/png\" }, \"name\": \"openshift-pipelines-operator-rh\", \"schema\": \"olm.package\" }",
"jq -s '.[] | select( .schema == \"olm.package\") | .name' <catalog_name>.json",
"jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .package == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json",
"jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select ( .name == \"<channel>\") | select( .package == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.bundle\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.bundle\" ) | select ( .name == \"<bundle_name>\") | select( .package == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json",
"\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\"",
"jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json",
"\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\"",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: <channel> version: \"<version>\"",
"oc apply -f pipeline-operator.yaml",
"operator.operators.operatorframework.io/pipelines-operator created",
"oc get operator.operators.operatorframework.io pipelines-operator -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"1.11.x\"}} creationTimestamp: \"2024-01-30T20:06:09Z\" generation: 1 name: pipelines-operator resourceVersion: \"44362\" uid: 4272d228-22e1-419e-b9a7-986f982ee588 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.11.x status: conditions: - lastTransitionTime: \"2024-01-30T20:06:15Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-01-30T20:06:31Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Installed installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280",
"oc get bundleDeployment pipelines-operator -o yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: creationTimestamp: \"2024-01-30T20:06:15Z\" generation: 2 name: pipelines-operator ownerReferences: - apiVersion: operators.operatorframework.io/v1alpha1 blockOwnerDeletion: true controller: true kind: Operator name: pipelines-operator uid: 4272d228-22e1-419e-b9a7-986f982ee588 resourceVersion: \"44464\" uid: 0a0c3525-27e2-4c93-bf57-55920a7707c0 spec: provisionerClassName: core-rukpak-io-plain template: metadata: {} spec: provisionerClassName: core-rukpak-io-registry source: image: ref: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280 type: image status: activeBundle: pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw conditions: - lastTransitionTime: \"2024-01-30T20:06:15Z\" message: Successfully unpacked the pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw Bundle reason: UnpackSuccessful status: \"True\" type: HasValidBundle - lastTransitionTime: \"2024-01-30T20:06:28Z\" message: Instantiated bundle pipelines-operator-29x720cjzx8yiowf13a3j75fil2zs3mfw successfully reason: InstallationSucceeded status: \"True\" type: Installed - lastTransitionTime: \"2024-01-30T20:06:40Z\" message: BundleDeployment is healthy reason: Healthy status: \"True\" type: Healthy observedGeneration: 2",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json",
"\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\"",
"jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json",
"\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\"",
"oc get operator.operators.operatorframework.io <operator_name> -o yaml",
"oc get operator.operators.operatorframework.io pipelines-operator -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"1.11.1\"}} creationTimestamp: \"2024-02-06T17:47:15Z\" generation: 2 name: pipelines-operator resourceVersion: \"84528\" uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9 spec: channel: latest 1 packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.11.1 2 status: conditions: - lastTransitionTime: \"2024-02-06T17:47:21Z\" message: bundledeployment status is unknown observedGeneration: 2 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: \"2024-02-06T17:50:58Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 2 reason: Success status: \"True\" type: Resolved resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.12.1 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: \">1.11.1, <1.13\" 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: pipelines-1.13 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest version: \"<1.13\"",
"oc apply -f pipelines-operator.yaml",
"operator.operators.operatorframework.io/pipelines-operator configured",
"oc patch operator.operators.operatorframework.io/pipelines-operator -p '{\"spec\":{\"version\":\"1.12.1\"}}' --type=merge",
"operator.operators.operatorframework.io/pipelines-operator patched",
"oc get operator.operators.operatorframework.io pipelines-operator -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"1.12.1\"}} creationTimestamp: \"2024-02-06T19:16:12Z\" generation: 4 name: pipelines-operator resourceVersion: \"58122\" uid: 886bbf73-604f-4484-9f87-af6ce0f86914 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 1.12.1 1 status: conditions: - lastTransitionTime: \"2024-02-06T19:30:57Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a\" observedGeneration: 3 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-02-06T19:30:57Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a\" observedGeneration: 3 reason: Success status: \"True\" type: Resolved installedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a resolvedBundleResource: registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2f1b8ef0fd741d1d686489475423dabc07c55633a4dfebc45e1d533183179f6a",
"oc get operator.operators.operatorframework.io <operator_name> -o yaml",
"get operator.operators.operatorframework.io pipelines-operator -o yaml apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"packageName\":\"openshift-pipelines-operator-rh\",\"version\":\"2.0.0\"}} creationTimestamp: \"2024-02-06T17:47:15Z\" generation: 1 name: pipelines-operator resourceVersion: \"82667\" uid: dffe2c89-b9c4-427e-b694-ada0b37fc0a9 spec: channel: latest packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: 2.0.0 status: conditions: - lastTransitionTime: \"2024-02-06T17:47:21Z\" message: installation has not been attempted due to failure to gather data for resolution observedGeneration: 1 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: \"2024-02-06T17:47:21Z\" message: no package \"openshift-pipelines-operator-rh\" matching version \"2.0.0\" found in channel \"latest\" observedGeneration: 1 reason: ResolutionFailed status: \"False\" type: Resolved",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: \">=1.11, <1.13\"",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh channel: latest 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: 1.11.1 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh version: >1.11.1 1",
"oc apply -f <extension_name>.yaml",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 version: <version> 3 upgradeConstraintPolicy: Ignore 4",
"oc apply -f <extension_name>.yaml",
"oc delete operator.operators.operatorframework.io <operator_name>",
"operator.operators.operatorframework.io \"<operator_name>\" deleted",
"oc get operator.operators.operatorframework.io",
"No resources found",
"oc get ns <operator_name>-system",
"Error from server (NotFound): namespaces \"<operator_name>-system\" not found",
"oc delete catalog <catalog_name>",
"catalog.catalogd.operatorframework.io \"my-catalog\" deleted",
"oc get catalog",
"manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml",
"FROM scratch 1 ADD manifests /manifests",
"podman build -f plainbundle.Dockerfile -t quay.io/<organization_name>/<repository_name>:<image_tag> . 1",
"podman push quay.io/<organization_name>/<repository_name>:<image_tag>",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.15 1",
". ├── <catalog_dir> └── <catalog_dir>.Dockerfile",
"opm init <extension_name> --output json > <catalog_dir>/index.json",
"{ { \"schema\": \"olm.package\", \"name\": \"<extension_name>\", \"defaultChannel\": \"\" } }",
"{ \"schema\": \"olm.bundle\", \"name\": \"<extension_name>.v<version>\", \"package\": \"<extension_name>\", \"image\": \"quay.io/<organization_name>/<repository_name>:<image_tag>\", \"properties\": [ { \"type\": \"olm.package\", \"value\": { \"packageName\": \"<extension_name>\", \"version\": \"<bundle_version>\" } }, { \"type\": \"olm.bundle.mediatype\", \"value\": \"plain+v0\" } ] }",
"{ \"schema\": \"olm.channel\", \"name\": \"<desired_channel_name>\", \"package\": \"<extension_name>\", \"entries\": [ { \"name\": \"<extension_name>.v<version>\" } ] }",
"{ \"schema\": \"olm.package\", \"name\": \"example-extension\", \"defaultChannel\": \"preview\" } { \"schema\": \"olm.bundle\", \"name\": \"example-extension.v0.0.1\", \"package\": \"example-extension\", \"image\": \"quay.io/example-org/example-extension-bundle:v0.0.1\", \"properties\": [ { \"type\": \"olm.package\", \"value\": { \"packageName\": \"example-extension\", \"version\": \"0.0.1\" } }, { \"type\": \"olm.bundle.mediatype\", \"value\": \"plain+v0\" } ] } { \"schema\": \"olm.channel\", \"name\": \"preview\", \"package\": \"example-extension\", \"entries\": [ { \"name\": \"example-extension.v0.0.1\" } ] }",
"opm validate <catalog_dir>",
"podman build -f <catalog_dir>.Dockerfile -t quay.io/<organization_name>/<repository_name>:<image_tag> .",
"podman push quay.io/<organization_name>/<repository_name>:<image_tag>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/operators/index |
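The jq-based catalog queries above compose into a short, repeatable inspection loop. A minimal sketch that chains them, assuming the redhat-operators catalog created earlier has reached the Unpacked phase and that /tmp/rhoc.json is a convenient scratch path:

```bash
# Serve the unpacked catalog content on localhost (runs in the background).
oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80 &

# Pull the file-based catalog and list the channels for one package,
# reusing the filters shown in the examples above.
curl -sL http://localhost:8080/catalogs/redhat-operators/all.json -o /tmp/rhoc.json
jq -s '.[] | select(.schema == "olm.channel") | select(.package == "openshift-pipelines-operator-rh") | .name' /tmp/rhoc.json
```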
Chapter 13. Enabling GPU support in OpenShift AI | Chapter 13. Enabling GPU support in OpenShift AI Optionally, to ensure that your data scientists can use compute-heavy workloads in their models, you can enable graphics processing units (GPUs) in OpenShift AI. Important The NVIDIA GPU Add-on is no longer supported. Instead, enable GPUs by installing the NVIDIA GPU Operator. If your deployment has a previously-installed NVIDIA GPU Add-on, before you install the NVIDIA GPU Operator, use Red Hat OpenShift Cluster Manager to uninstall the NVIDIA GPU Add-on from your cluster. Prerequisites You have logged in to your OpenShift cluster. You have the cluster-admin role in your OpenShift cluster. Procedure To enable GPU support on an OpenShift cluster, follow the instructions here: NVIDIA GPU Operator on Red Hat OpenShift Container Platform in the NVIDIA documentation. Delete the migration-gpu-status ConfigMap. In the OpenShift web console, switch to the Administrator perspective. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate ConfigMap. Search for the migration-gpu-status ConfigMap. Click the action menu (...) and select Delete ConfigMap from the list. The Delete ConfigMap dialog appears. Inspect the dialog and confirm that you are deleting the correct ConfigMap. Click Delete . Restart the dashboard replicaset. In the OpenShift web console, switch to the Administrator perspective. Click Workloads Deployments . Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate deployment. Search for the rhods-dashboard deployment. Click the action menu (...) and select Restart Rollout from the list. Wait until the Status column indicates that all pods in the rollout have fully restarted. Verification The NVIDIA GPU Operator appears on the Operators Installed Operators page in the OpenShift web console. The reset migration-gpu-status instance is present in the Instances tab on the AcceleratorProfile custom resource definition (CRD) details page. After installing the NVIDIA GPU Operator, create an accelerator profile as described in Working with accelerator profiles . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_the_openshift_ai_cloud_service/enabling-gpu-support_install |
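The ConfigMap deletion and dashboard restart described above can also be performed from the CLI. A minimal sketch, assuming the default redhat-ods-applications namespace; adjust it if your deployment uses a different project:

```bash
# Remove the leftover migration ConfigMap.
oc delete configmap migration-gpu-status -n redhat-ods-applications

# Restart the dashboard rollout and wait for the new pods to settle.
oc rollout restart deployment/rhods-dashboard -n redhat-ods-applications
oc rollout status deployment/rhods-dashboard -n redhat-ods-applications
```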
Chapter 5. Deploy standalone Multicloud Object Gateway | Chapter 5. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using the local storage devices. 5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.1.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.1.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. 
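This prerequisite can be checked quickly from the CLI before starting the wizard. A sketch, assuming the operator was installed into the recommended openshift-storage namespace:

```bash
# The OpenShift Data Foundation ClusterServiceVersion should report
# Succeeded before you proceed to Create StorageSystem.
oc get csv -n openshift-storage
```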
Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. 
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) 5.2. Deploy standalone Multicloud Object Gateway using local storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . 
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.2.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click Next . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. 
If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-standalone-multicloud-object-gateway |
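The pod checks in the verification steps can be scripted instead of read off the console. A minimal sketch, assuming the openshift-storage namespace; pod names carry generated suffixes, hence the pattern match:

```bash
# Confirm the operator and Multicloud Object Gateway pods are Running.
oc get pods -n openshift-storage | grep -E 'ocs-operator|rook-ceph-operator|noobaa'

# Block until every pod in the namespace reports Ready (or time out).
oc wait pods --all -n openshift-storage --for=condition=Ready --timeout=300s
```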
probe::nfs.proc.write_done | probe::nfs.proc.write_done Name probe::nfs.proc.write_done - NFS client response to a write RPC task Synopsis nfs.proc.write_done Values server_ip IP address of server status result of last operation version NFS version count number of bytes written prot transfer protocol valid fattr->valid, indicates which fields are valid timestamp V4 timestamp, which is used for lease renewal Description Fires when a reply to a write RPC task is received or some write error occurs (timeout or socket shutdown). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-proc-write-done |
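As an illustration, the probe can be exercised with a one-line SystemTap script. A sketch that prints three of the integer values documented above each time a write reply arrives:

```bash
stap -e 'probe nfs.proc.write_done {
  # version, count, and status are the probe values described above.
  printf("nfs write_done: version=%d count=%d status=%d\n", version, count, status)
}'
```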
Chapter 13. Installing a three-node cluster on GCP | Chapter 13. Installing a three-node cluster on GCP In OpenShift Container Platform version 4.15, you can install a three-node cluster on Google Cloud Platform (GCP). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 13.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 13.2. steps Installing a cluster on GCP with customizations Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/installing-gcp-three-node |
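Once the cluster is up, you can confirm from the CLI that the control plane machines are schedulable compute nodes. A sketch, assuming a default three-node deployment:

```bash
# All three nodes should list both control plane and worker roles.
oc get nodes

# The cluster scheduler should report that control plane nodes are schedulable.
oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}'
```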
Chapter 6. Migrating application workloads | Chapter 6. Migrating application workloads You can migrate application workloads from the internal mode storage classes to the external mode storage classes using Migration Toolkit for Containers using the same cluster as source and target. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_multiple_openshift_data_foundation_storage_clusters/proc_migrating-application-workloads_rhodf |
Chapter 9. Migrating your applications | Chapter 9. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 9.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 9.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . 
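A jsonpath form of the same lookup also works, as a sketch, if you prefer it to the go-template syntax:

```bash
oc get route migration -n openshift-migration -o jsonpath='https://{.spec.host}'
```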
Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 9.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. 
In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 9.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 9.2.4. 
Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click Next . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click Next . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click Next . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click Next . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click Next . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. 
If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 9.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu next to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home → Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads → Pods to verify that the pods are running in the migrated namespace. Click Storage → Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
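As noted above, MTC leaves migrated PVs with a Retain reclaim policy. If you want to restore the original policy after verifying the migration, the following is a minimal sketch using the standard oc patch syntax; <pv_name> is a placeholder, and Delete stands in for whatever original policy the PVOriginalReclaimPolicy annotation records:
$ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
You can first list the migrated volumes and their current policies with oc get pv. | [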
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc create token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"az group list",
"{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migration_toolkit_for_containers/migrating-applications-with-mtc |
Chapter 6. Storage classes and storage pools | Chapter 6. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 6.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage → StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 6.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager Procedure In the OpenShift Web Console, navigate to Storage → StorageClasses .
Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Select existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop-down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select the Key Management Service Provider . If Vault is selected as the Key Management Service Provider , follow these steps: Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM-encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . If Thales CipherTrust Manager (using KMIP) is selected as the Key Management Service Provider , follow these steps: Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage → Storage Classes . Click the Storage class name → YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads → ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) → Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID .
You can assign kv for KV secret engine API version 1 and kv-v2 for KV secret engine API version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp .
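If you prefer the CLI to the web console for the ConfigMap steps above, you can view and edit the same object with oc. This is a sketch that assumes the default openshift-storage namespace used by OpenShift Data Foundation:
$ oc get configmap csi-kms-connection-details -n openshift-storage -o yaml
$ oc edit configmap csi-kms-connection-details -n openshift-storage
The vaultBackend parameter is added to the same per-connection JSON blocks shown in the example above. | [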
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/storage-classes-and-storage-pools_osp |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/proc_providing-feedback-on-red-hat-documentation |
Chapter 12. Configuring logging for Kafka components | Chapter 12. Configuring logging for Kafka components Configure the logging levels of Kafka components directly in the configuration properties. You can also change the broker levels dynamically for Kafka brokers, Kafka Connect, and MirrorMaker 2. Increasing the log level detail, such as from INFO to DEBUG, can aid in troubleshooting a Kafka cluster. However, more verbose logs may also negatively impact performance and make it more difficult to diagnose issues. 12.1. Configuring Kafka logging properties Kafka components use the Log4j framework for error logging. By default, logging configuration is read from the classpath or config directory using the following properties files: log4j.properties for Kafka, and connect-log4j.properties for Kafka Connect and MirrorMaker 2. If they are not set explicitly, loggers inherit the log4j.rootLogger logging level configuration in each file. You can change the logging level in these files. You can also add and set logging levels for other loggers. You can change the location and name of the logging properties file using the KAFKA_LOG4J_OPTS environment variable, which is used by the start script for the component. Passing the name and location of the logging properties file used by Kafka nodes export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/log4j.properties"; \ ./bin/kafka-server-start.sh \ ./config/kraft/server.properties Passing the name and location of the logging properties file used by Kafka Connect export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties"; \ ./bin/connect-distributed.sh \ ./config/connect-distributed.properties Passing the name and location of the logging properties file used by MirrorMaker 2 export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties"; \ ./bin/connect-mirror-maker.sh \ ./config/connect-mirror-maker.properties 12.2. Dynamically change logging levels for Kafka broker loggers Kafka broker logging is provided by broker loggers in each broker. Dynamically change the logging level for broker loggers at runtime without having to restart the broker. You can also reset broker loggers dynamically to their default logging levels. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Kafka is running . Procedure List all the broker loggers for a broker by using the kafka-configs.sh tool: ./bin/kafka-configs.sh --bootstrap-server <broker_address> --describe --entity-type broker-loggers --entity-name <broker_id> For example, for broker 0 : ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0 This returns the logging level for each logger: TRACE , DEBUG , INFO , WARN , ERROR , or FATAL . For example: #... kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={} kafka.log.TimeIndex=INFO sensitive=false synonyms={} Change the logging level for one or more broker loggers. Use the --alter and --add-config options and specify each logger and its level as a comma-separated list in double quotes.
./bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --add-config " LOGGER-ONE=NEW-LEVEL , LOGGER-TWO=NEW-LEVEL " --entity-type broker-loggers --entity-name <broker_id> For example, for broker 0 : ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config "kafka.controller.ControllerChannelManager=WARN,kafka.log.TimeIndex=WARN" --entity-type broker-loggers --entity-name 0 If successful this returns: Completed updating config for broker: 0. Resetting a broker logger You can reset one or more broker loggers to their default logging levels by using the kafka-configs.sh tool. Use the --alter and --delete-config options and specify each broker logger as a comma-separated list in double quotes: ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config " LOGGER-ONE , LOGGER-TWO " --entity-type broker-loggers --entity-name <broker_id> Additional resources Updating Broker Configs in the Apache Kafka documentation 12.3. Dynamically change logging levels for Kafka Connect and MirrorMaker 2 Dynamically change logging levels for Kafka Connect workers or MirrorMaker 2 connectors at runtime without having to restart. Use the Kafka Connect API to change the log level temporarily for a worker or connector logger. The Kafka Connect API provides an admin/loggers endpoint to get or modify logging levels. When you change the log level using the API, the logger configuration in the connect-log4j.properties configuration file does not change. If required, you can permanently change the logging levels in the configuration file. Note You can only change the logging level of MirrorMaker 2 at runtime when in distributed or standalone mode. Dedicated MirrorMaker 2 clusters have no Kafka Connect REST API, so changing the logging level is not possible. The default listener for the Kafka Connect API is on port 8083, which is used in this procedure. You can change or add more listeners, and also enable TLS authentication, using admin.listeners configuration. Example listener configuration for the admin endpoint admin.listeners=https://localhost:8083 admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks admin.listeners.https.ssl.truststore.password=123456 admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks admin.listeners.https.ssl.keystore.password=123456 If you do not want the admin endpoint to be available, you can disable it in the configuration by specifying an empty string. Example listener configuration to disable the admin endpoint admin.listeners= Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Kafka is running . Kafka Connect or MirrorMaker 2 is running. Procedure Check the current logging level for the loggers configured in the connect-log4j.properties file: $ cat ./config/connect-log4j.properties # ... log4j.rootLogger=INFO, stdout, connectAppender # ... log4j.logger.org.reflections=ERROR Use a curl command to check the logging levels from the admin/loggers endpoint of the Kafka Connect API: curl -s http://localhost:8083/admin/loggers/ | jq { "org.reflections": { "level": "ERROR" }, "root": { "level": "INFO" } } jq prints the output in JSON format. The list shows standard org and root level loggers, plus any specific loggers with modified logging levels.
If you configure TLS (Transport Layer Security) authentication for the admin.listeners configuration in Kafka Connect, then the address of the loggers endpoint is the value specified for admin.listeners with the protocol as https, such as https://localhost:8083 . You can also get the log level of a specific logger: curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.mirror.MirrorCheckpointConnector | jq { "level": "INFO" } Use a PUT method to change the log level for a logger: curl -Ss -X PUT -H 'Content-Type: application/json' -d '{"level": "TRACE"}' http://localhost:8083/admin/loggers/root { # ... "org.reflections": { "level": "TRACE" }, "org.reflections.Reflections": { "level": "TRACE" }, "root": { "level": "TRACE" } } If you change the root logger, the logging level for loggers that use the root logging level by default is also changed.
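If you only need more detail from one component, you can scope the change to a single logger instead of the root logger. As a minimal sketch, assuming the same default localhost:8083 admin listener used throughout this procedure, the following call sets only the org.reflections logger back to ERROR:
curl -Ss -X PUT -H 'Content-Type: application/json' -d '{"level": "ERROR"}' http://localhost:8083/admin/loggers/org.reflections
The root logger and all other loggers are left unchanged. | [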
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/log4j.properties\"; ./bin/kafka-server-start.sh ./config/kraft/server.properties",
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties\"; ./bin/connect-distributed.sh ./config/connect-distributed.properties",
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties\"; ./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties",
"./bin/kafka-configs.sh --bootstrap-server <broker_address> --describe --entity-type broker-loggers --entity-name <broker_id>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0",
"# kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={} kafka.log.TimeIndex=INFO sensitive=false synonyms={}",
"./bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --add-config \" LOGGER-ONE=NEW-LEVEL , LOGGER-TWO=NEW-LEVEL \" --entity-type broker-loggers --entity-name <broker_id>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config \"kafka.controller.ControllerChannelManager=WARN,kafka.log.TimeIndex=WARN\" --entity-type broker-loggers --entity-name 0",
"Completed updating config for broker: 0.",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config \" LOGGER-ONE , LOGGER-TWO \" --entity-type broker-loggers --entity-name <broker_id>",
"admin.listeners=https://localhost:8083 admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks admin.listeners.https.ssl.truststore.password=123456 admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks admin.listeners.https.ssl.keystore.password=123456",
"admin.listeners=",
"cat ./config/connect-log4j.properties log4j.rootLogger=INFO, stdout, connectAppender log4j.logger.org.reflections=ERROR",
"curl -s http://localhost:8083/admin/loggers/ | jq { \"org.reflections\": { \"level\": \"ERROR\" }, \"root\": { \"level\": \"INFO\" } }",
"curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.mirror.MirrorCheckpointConnector | jq { \"level\": \"INFO\" }",
"curl -Ss -X PUT -H 'Content-Type: application/json' -d '{\"level\": \"TRACE\"}' http://localhost:8083/admin/loggers/root { # \"org.reflections\": { \"level\": \"TRACE\" }, \"org.reflections.Reflections\": { \"level\": \"TRACE\" }, \"root\": { \"level\": \"TRACE\" } }"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-kafka-logging-str |
Installing on vSphere | Installing on vSphere OpenShift Container Platform 4.16 Installing OpenShift Container Platform on vSphere Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_vsphere/index |
21.2.2.2. The /etc/selinux/ Directory | 21.2.2.2. The /etc/selinux/ Directory The /etc/selinux/ directory is the primary location for all policy files as well as the main configuration file. The following example shows sample contents of the /etc/selinux/ directory: The two subdirectories, strict/ and targeted/ , are the specific directories where the policy files of the same name (i.e., strict and targeted) are contained. For more information on SELinux policy and policy configuration, refer to the Red Hat SELinux Guide.
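The sample contents shown in the listing below were produced with a long-format directory listing, which you can reproduce on your own system; for example:
ls -l /etc/selinux/
In addition to the strict/ and targeted/ subdirectories, note the config file, which is the main SELinux configuration file mentioned above. | [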
"-rw-r--r-- 1 root root 448 Sep 22 17:34 config drwxr-xr-x 5 root root 4096 Sep 22 17:27 strict drwxr-xr-x 5 root root 4096 Sep 22 17:28 targeted"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-selinux-files-etc-selinux |
Chapter 40. Configuring and running standalone Business Central | Chapter 40. Configuring and running standalone Business Central You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. You can use sample configuration files to run the Business Central standalone JAR file out of the box or you can customize the sample files for your requirements. Note This JAR file is supported only when it is run on Red Hat Enterprise Linux. Prerequisites The Red Hat Process Automation Manager 7.13.5 Business Central Standalone ( rhpam-7.13.5-business-central-standalone.jar ) and the Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) files have been downloaded from the Software Downloads page for Red Hat Process Automation Manager 7.13, as described in Chapter 34, Downloading the Red Hat Process Automation Manager installation files . Procedure Extract the downloaded rhpam-7.13.5-add-ons.zip to a temporary directory. This archive includes the rhpam-7.13.5-standalone-sample-configuration.zip file. Extract the rhpam-7.13.5-standalone-sample-configuration.zip file to the directory that contains the rhpam-7.13.5-business-central-standalone.jar file. The rhpam-7.13.5-standalone-sample-configuration.zip file contains the following sample configuration files: application-script.cli : Sample script for adding a user and kie server system properties kie-fs-realm-users : Sample user data You can run the rhpam-7.13.5-business-central-standalone.jar file with the sample data provided in the configuration files or you can customize the data for your requirements. To customize the configuration data, complete the following steps: Edit the application-script.cli file to include an administrative user with admin , user , rest-all , rest-client and kie-server roles. In the following example, replace <USERNAME> and <PASSWORD> with your username and password of the user you want to create. To run the Business Central standalone JAR file, enter the following command: To set application properties when you run the JAR file, include the -D<PROPERTY>=<VALUE> parameter in the command, where <PROPERTY> is the name of a supported application property and <VALUE> is the property value: For example, to run Business Central and connect to KIE Server as the user controllerUser , enter: java -jar rhpam-7.13.5-business-central-standalone.jar \ --cli-script=application-script.cli \ -Dorg.kie.server.user=controllerUser \ -Dorg.kie.server.pwd=controllerUser1234 Doing this enables you to deploy containers to KIE Server. See Appendix A, Business Central system properties for more information. Note To enable user and group management in Business Central, set the value of the org.uberfire.ext.security.management.wildfly.cli.folderPath property to kie-fs-realm-users .
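For example, building on the commands above, you can pass the folder path property on the same command line as the other system properties. This is a sketch that assumes the kie-fs-realm-users directory extracted earlier is in the current working directory:
java -jar rhpam-7.13.5-business-central-standalone.jar \ --cli-script=application-script.cli \ -Dorg.uberfire.ext.security.management.wildfly.cli.folderPath=kie-fs-realm-users | [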
"/subsystem=elytron/filesystem-realm=KieRealm:add-identity(identity=<USERNAME>) /subsystem=elytron/filesystem-realm=KieRealm:set-password(identity=<USERNAME>, clear={password=\"<PASSWORD>\"}) /subsystem=elytron/filesystem-realm=KieRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=[\"admin\",\"user\",\"rest-all\",\"rest-client\",\"kie-server\"])",
"java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli",
"java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli -D<PROPERTY>=<VALUE> -D<PROPERTY>=<VALUE>",
"java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli -Dorg.kie.server.user=controllerUser -Dorg.kie.server.pwd=controllerUser1234"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/run-dc-standalone-proc_install-on-jws |
1.2. Security Controls | 1.2. Security Controls Computer security is often divided into three distinct master categories, commonly referred to as controls : Physical Technical Administrative These three broad categories define the main objectives of proper security implementation. Within these controls are sub-categories that further detail the controls and how to implement them. 1.2.1. Physical Controls Physical control is the implementation of security measures in a defined structure used to deter or prevent unauthorized access to sensitive material. Examples of physical controls are: Closed-circuit surveillance cameras Motion or thermal alarm systems Security guards Picture IDs Locked and dead-bolted steel doors Biometrics (includes fingerprint, voice, face, iris, handwriting, and other automated methods used to recognize individuals) 1.2.2. Technical Controls Technical controls use technology as a basis for controlling the access and usage of sensitive data throughout a physical structure and over a network. Technical controls are far-reaching in scope and encompass such technologies as: Encryption Smart cards Network authentication Access control lists (ACLs) File integrity auditing software 1.2.3. Administrative Controls Administrative controls define the human factors of security. They involve all levels of personnel within an organization and determine which users have access to what resources and information by such means as: Training and awareness Disaster preparedness and recovery plans Personnel recruitment and separation strategies Personnel registration and accounting | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Security_Controls |
Chapter 265. Protobuf DataFormat | Chapter 265. Protobuf DataFormat Available as of Camel version 2.2.0 | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/protobuf-dataformat |
Chapter 8. Message delivery | Chapter 8. Message delivery 8.1. Handling unacknowledged deliveries Messaging systems use message acknowledgment to track if the goal of sending a message is truly accomplished. When a message is sent, there is a period of time after the message is sent and before it is acknowledged (the message is "in flight"). If the network connection is lost during that time, the status of the message delivery is unknown, and the delivery might require special handling in application code to ensure its completion. The sections below describe the conditions for message delivery when connections fail. Non-transacted producer with an unacknowledged delivery If a message is in flight, it is sent again after reconnect, provided a send timeout is not set and has not elapsed. No user action is required. Transacted producer with an uncommitted transaction If a message is in flight, it is sent again after reconnect. If the send is the first in a new transaction, then sending continues as normal after reconnect. If there are previous sends in the transaction, then the transaction is considered failed, and any subsequent commit operation throws a TransactionRolledBackException . To ensure delivery, the user must resend any messages belonging to a failed transaction. Transacted producer with a pending commit If a commit is in flight, then the transaction is considered failed, and any subsequent commit operation throws a TransactionRolledBackException . To ensure delivery, the user must resend any messages belonging to a failed transaction. Non-transacted consumer with an unacknowledged delivery If a message is received but not yet acknowledged, then acknowledging the message produces no error but results in no action by the client. Because the received message is not acknowledged, the producer might resend it. To avoid duplicates, the user must filter out duplicate messages by message ID. Transacted consumer with an uncommitted transaction If an active transaction is not yet committed, it is considered failed, and any pending acknowledgments are dropped. Any subsequent commit operation throws a TransactionRolledBackException . The producer might resend the messages belonging to the transaction. To avoid duplicates, the user must filter out duplicate messages by message ID. Transacted consumer with a pending commit If a commit is in flight, then the transaction is considered failed. Any subsequent commit operation throws a TransactionRolledBackException . The producer might resend the messages belonging to the transaction. To avoid duplicates, the user must filter out duplicate messages by message ID. 8.2. Extended session acknowledgment modes The client supports two additional session acknowledgement modes beyond those defined in the JMS specification. Individual acknowledge In this mode, messages must be acknowledged individually by the application using the Message.acknowledge() method used when the session is in CLIENT_ACKNOWLEDGE mode. Unlike with CLIENT_ACKNOWLEDGE mode, only the target message is acknowledged. All other delivered messages remain unacknowledged. The integer value used to activate this mode is 101. connection.createSession(false, 101); No acknowledge In this mode, messages are accepted at the server before being dispatched to the client, and no acknowledgment is performed by the client. The client supports two integer values to activate this mode, 100 and 257. connection.createSession(false, 100); | [
"connection.createSession(false, 101);",
"connection.createSession(false, 100);"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/message_delivery |
Chapter 11. Prometheus metrics monitoring in Red Hat Decision Manager | Chapter 11. Prometheus metrics monitoring in Red Hat Decision Manager Prometheus is an open-source systems monitoring toolkit that you can use with Red Hat Decision Manager to collect and store metrics related to the execution of business rules, processes, Decision Model and Notation (DMN) models, and other Red Hat Decision Manager assets. You can access the stored metrics through a REST API call to the KIE Server, through the Prometheus expression browser, or using a data-graphing tool such as Grafana. You can configure Prometheus metrics monitoring for an on-premise KIE Server instance, for KIE Server on Spring Boot, or for a KIE Server deployment on Red Hat OpenShift Container Platform. For the list of available metrics that KIE Server exposes with Prometheus, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/kie-server-parent/kie-server-services/kie-server-services-prometheus/src/main/java/org/kie/server/services/prometheus . Important Red Hat support for Prometheus is limited to the setup and configuration recommendations provided in Red Hat product documentation. 11.1. Configuring Prometheus metrics monitoring for KIE Server You can configure your KIE Server instances to use Prometheus to collect and store metrics related to your business asset activity in Red Hat Decision Manager. For the list of available metrics that KIE Server exposes with Prometheus, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/kie-server-parent/kie-server-services/kie-server-services-prometheus/src/main/java/org/kie/server/services/prometheus . Prerequisites KIE Server is installed. You have kie-server user role access to KIE Server. Prometheus is installed. For information about downloading and using Prometheus, see the Prometheus documentation page . Procedure In your KIE Server instance, set the org.kie.prometheus.server.ext.disabled system property to false to enable the Prometheus extension. You can define this property when you start KIE Server or in the standalone.xml or standalone-full.xml file of the Red Hat Decision Manager distribution. If you are running Red Hat Decision Manager on Spring Boot, configure the required key in the application.properties system property: Spring Boot application.properties key for Red Hat Decision Manager and Prometheus kieserver.drools.enabled=true kieserver.dmn.enabled=true kieserver.prometheus.enabled=true In the prometheus.yaml file of your Prometheus distribution, add the following settings in the scrape_configs section to configure Prometheus to scrape metrics from KIE Server: Scrape configurations in prometheus.yaml file scrape_configs: - job_name: 'kie-server' metrics_path: /SERVER_PATH/services/rest/metrics basic_auth: username: USER_NAME password: PASSWORD static_configs: - targets: ["HOST:PORT"] Scrape configurations in prometheus.yaml file for Spring Boot (if applicable) scrape_configs: - job_name: 'kie' metrics_path: /rest/metrics static_configs: - targets: ["HOST:PORT"] Replace the values according to your KIE Server location and settings. Start the KIE Server instance.
Example start command for Red Hat Decision Manager on Red Hat JBoss EAP After you start the configured KIE Server instance, Prometheus begins collecting metrics and KIE Server publishes the metrics to the REST API endpoint http://HOST:PORT/SERVER/services/rest/metrics (or on Spring Boot, to http://HOST:PORT/rest/metrics ). In a REST client or curl utility, send a REST API request with the following components to verify that KIE Server is publishing the metrics: For REST client: Authentication : Enter the user name and password of the KIE Server user with the kie-server role. HTTP Headers : Set the following header: Accept : application/json HTTP method : Set to GET . URL : Enter the KIE Server REST API base URL and metrics endpoint, such as http://localhost:8080/kie-server/services/rest/metrics (or on Spring Boot, http://localhost:8080/rest/metrics ). For curl utility: -u : Enter the user name and password of the KIE Server user with the kie-server role. -H : Set the following header: accept : application/json -X : Set to GET . URL : Enter the KIE Server REST API base URL and metrics endpoint, such as http://localhost:8080/kie-server/services/rest/metrics (or on Spring Boot, http://localhost:8080/rest/metrics ). Example curl command for Red Hat Decision Manager on Red Hat JBoss EAP Example curl command for Red Hat Decision Manager on Spring Boot Example server response If the metrics are not available in KIE Server, review and verify the KIE Server and Prometheus configurations described in this section. You can also interact with your collected metrics in the Prometheus expression browser at http://HOST:PORT/graph , or integrate your Prometheus data source with a data-graphing tool such as Grafana: Figure 11.1. Prometheus expression browser with KIE Server metrics Figure 11.2. Prometheus expression browser with KIE Server target Figure 11.3. Grafana dashboard with KIE Server metrics for DMN models Figure 11.4. Grafana dashboard with KIE Server metrics for solvers Additional resources Getting Started with Prometheus Grafana Support for Prometheus Using Prometheus in Grafana 11.2. Configuring Prometheus metrics monitoring for KIE Server on Red Hat OpenShift Container Platform You can configure your KIE Server deployment on Red Hat OpenShift Container Platform to use Prometheus to collect and store metrics related to your business asset activity in Red Hat Decision Manager. For the list of available metrics that KIE Server exposes with Prometheus, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/kie-server-parent/kie-server-services/kie-server-services-prometheus/src/main/java/org/kie/server/services/prometheus . Prerequisites KIE Server is installed and deployed on Red Hat OpenShift Container Platform. For more information about KIE Server on OpenShift, see the relevant OpenShift deployment option in the Product documentation for Red Hat Decision Manager 7.13 . You have kie-server user role access to KIE Server. Prometheus Operator is installed. For information about downloading and using Prometheus Operator, see the Prometheus Operator project in GitHub. Procedure In the DeploymentConfig object of your KIE Server deployment on OpenShift, set the PROMETHEUS_SERVER_EXT_DISABLED environment variable to false to enable the Prometheus extension.
You can set this variable in the OpenShift web console or use the oc command in a command terminal: If you have not yet deployed your KIE Server on OpenShift, then in the OpenShift template that you plan to use for your OpenShift deployment (for example, rhpam713-prod-immutable-kieserver.yaml ), you can set the PROMETHEUS_SERVER_EXT_DISABLED template parameter to false to enable the Prometheus extension. If you are using the OpenShift Operator to deploy KIE Server on OpenShift, then in your KIE Server configuration, set the PROMETHEUS_SERVER_EXT_DISABLED environment variable to false to enable the Prometheus extension: apiVersion: app.kiegroup.org/v1 kind: KieApp metadata: name: enable-prometheus spec: environment: rhpam-trial objects: servers: - env: - name: PROMETHEUS_SERVER_EXT_DISABLED value: "false" Create a service-metrics.yaml file to add a service that exposes the metrics from KIE Server to Prometheus: apiVersion: v1 kind: Service metadata: annotations: description: RHPAM Prometheus metrics exposed labels: app: myapp-kieserver application: myapp-kieserver template: myapp-kieserver metrics: rhpam name: rhpam-app-metrics spec: ports: - name: web port: 8080 protocol: TCP targetPort: 8080 selector: deploymentConfig: myapp-kieserver sessionAffinity: None type: ClusterIP In a command terminal, use the oc command to apply the service-metrics.yaml file to your OpenShift deployment: oc apply -f service-metrics.yaml Create an OpenShift secret, such as metrics-secret , to access the Prometheus metrics on KIE Server. The secret must contain the "username" and "password" elements with KIE Server user credentials. For information about OpenShift secrets, see the Secrets chapter in the OpenShift Developer Guide . Create a service-monitor.yaml file that defines the ServiceMonitor object. A service monitor enables Prometheus to connect to the KIE Server metrics service. apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: rhpam-service-monitor labels: team: frontend spec: selector: matchLabels: metrics: rhpam endpoints: - port: web path: /services/rest/metrics basicAuth: password: name: metrics-secret key: password username: name: metrics-secret key: username In a command terminal, use the oc command to apply the service-monitor.yaml file to your OpenShift deployment: oc apply -f service-monitor.yaml After you complete these configurations, Prometheus begins collecting metrics and KIE Server publishes the metrics to the REST API endpoint http://HOST:PORT/kie-server/services/rest/metrics . You can interact with your collected metrics in the Prometheus expression browser at http://HOST:PORT/graph , or integrate your Prometheus data source with a data-graphing tool such as Grafana. The host and port for the Prometheus expression browser location http://HOST:PORT/graph was defined in the route where you exposed the Prometheus web console when you installed the Prometheus Operator. For information about OpenShift routes, see the Routes chapter in the OpenShift Architecture documentation. Figure 11.5. Prometheus expression browser with KIE Server metrics Figure 11.6. Prometheus expression browser with KIE Server target Figure 11.7. Grafana dashboard with KIE Server metrics for DMN models Figure 11.8. 
Grafana dashboard with KIE Server metrics for solvers Additional resources Prometheus Operator Getting started with the Prometheus Operator Prometheus RBAC Grafana Support for Prometheus Using Prometheus in Grafana OpenShift deployment options in Product documentation for Red Hat Decision Manager 7.13 11.3. Extending Prometheus metrics monitoring in KIE Server with custom metrics After you configure your KIE Server instance to use Prometheus metrics monitoring, you can extend the Prometheus functionality in KIE Server to use custom metrics according to your business needs. Prometheus then collects and stores your custom metrics along with the default metrics that KIE Server exposes with Prometheus. As an example, this procedure defines custom Decision Model and Notation (DMN) metrics to be collected and stored by Prometheus. Prerequisites Prometheus metrics monitoring is configured for your KIE Server instance. For information about Prometheus configuration with KIE Server on-premise, see Section 11.1, "Configuring Prometheus metrics monitoring for KIE Server" . For information about Prometheus configuration with KIE Server on Red Hat OpenShift Container Platform, see Section 11.2, "Configuring Prometheus metrics monitoring for KIE Server on Red Hat OpenShift Container Platform" . Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-prometheus</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-api</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-api</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-executor</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> <version>${version.org.kie}</version> </dependency> <dependency> <groupId>io.prometheus</groupId> <artifactId>simpleclient</artifactId> <version>0.5.0</version> </dependency> </dependencies> Implement the relevant listener from the org.kie.server.services.prometheus.PrometheusMetricsProvider interface as part of the custom listener class that defines your custom Prometheus metrics, as shown in the following example: Sample implementation of the DMNRuntimeEventListener listener in a custom listener class package org.kie.server.ext.prometheus; import io.prometheus.client.Gauge;
import org.kie.dmn.api.core.ast.DecisionNode; import org.kie.dmn.api.core.event.AfterEvaluateBKMEvent; import org.kie.dmn.api.core.event.AfterEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.BeforeEvaluateBKMEvent; import org.kie.dmn.api.core.event.BeforeEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.api.model.ReleaseId; import org.kie.server.services.api.KieContainerInstance; public class ExampleCustomPrometheusMetricListener implements DMNRuntimeEventListener { private final KieContainerInstance kieContainer; private final Gauge randomGauge = Gauge.build() .name("random_gauge_nanosecond") .help("Random gauge as an example of custom KIE Prometheus metric") .labelNames("container_id", "group_id", "artifact_id", "version", "decision_namespace", "decision_name") .register(); public ExampleCustomPrometheusMetricListener(KieContainerInstance containerInstance) { kieContainer = containerInstance; } public void beforeEvaluateDecision(BeforeEvaluateDecisionEvent e) { } public void afterEvaluateDecision(AfterEvaluateDecisionEvent e) { DecisionNode decisionNode = e.getDecision(); ReleaseId releaseId = kieContainer.getResource().getReleaseId(); randomGauge.labels(kieContainer.getContainerId(), releaseId.getGroupId(), releaseId.getArtifactId(), releaseId.getVersion(), decisionNode.getModelName(), decisionNode.getModelNamespace()) .set((int) (Math.random() * 100)); } public void beforeEvaluateBKM(BeforeEvaluateBKMEvent event) { } public void afterEvaluateBKM(AfterEvaluateBKMEvent event) { } public void beforeEvaluateContextEntry(BeforeEvaluateContextEntryEvent event) { } public void afterEvaluateContextEntry(AfterEvaluateContextEntryEvent event) { } public void beforeEvaluateDecisionTable(BeforeEvaluateDecisionTableEvent event) { } public void afterEvaluateDecisionTable(AfterEvaluateDecisionTableEvent event) { } public void beforeEvaluateDecisionService(BeforeEvaluateDecisionServiceEvent event) { } public void afterEvaluateDecisionService(AfterEvaluateDecisionServiceEvent event) { } } The PrometheusMetricsProvider interface contains the required listeners for collecting Prometheus metrics. The interface is incorporated by the kie-server-services-prometheus dependency that you declared in your project pom.xml file. In this example, the ExampleCustomPrometheusMetricListener class implements the DMNRuntimeEventListener listener (from the PrometheusMetricsProvider interface) and defines the custom DMN metrics to be collected and stored by Prometheus. 
Implement the PrometheusMetricsProvider interface as part of a custom metrics provider class that associates your custom listener with the PrometheusMetricsProvider interface, as shown in the following example: Sample implementation of the PrometheusMetricsProvider interface in a custom metrics provider class package org.kie.server.ext.prometheus; import org.jbpm.executor.AsynchronousJobListener; import org.jbpm.services.api.DeploymentEventListener; import org.kie.api.event.rule.AgendaEventListener; import org.kie.api.event.rule.DefaultAgendaEventListener; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.services.api.KieContainerInstance; import org.kie.server.services.prometheus.PrometheusMetricsProvider; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListener; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListenerAdapter; public class MyPrometheusMetricsProvider implements PrometheusMetricsProvider { public DMNRuntimeEventListener createDMNRuntimeEventListener(KieContainerInstance kContainer) { return new ExampleCustomPrometheusMetricListener(kContainer); } public AgendaEventListener createAgendaEventListener(String kieSessionId, KieContainerInstance kContainer) { return new DefaultAgendaEventListener(); } public PhaseLifecycleListener createPhaseLifecycleListener(String solverId) { return new PhaseLifecycleListenerAdapter() { }; } public AsynchronousJobListener createAsynchronousJobListener() { return null; } public DeploymentEventListener createDeploymentEventListener() { return null; } } In this example, the MyPrometheusMetricsProvider class implements the PrometheusMetricsProvider interface and includes your custom ExampleCustomPrometheusMetricListener listener class. To make the new metrics provider discoverable for KIE Server, create a META-INF/services/org.kie.server.services.prometheus.PrometheusMetricsProvider file in your Maven project and add the fully qualified class name of the PrometheusMetricsProvider implementation class within the file. For this example, the file contains the single line org.kie.server.ext.prometheus.MyPrometheusMetricsProvider . Build your project and copy the resulting JAR file into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME/standalone/deployments/kie-server.war/WEB-INF/lib . If you are deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform, create a custom KIE Server image and add this JAR file to the image. For more information about creating a custom KIE Server image with an additional JAR file, see Deploying a Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators . Start the KIE Server and deploy the built project to the running KIE Server. You can deploy the project using the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). After your project is deployed on a running KIE Server, Prometheus begins collecting metrics and KIE Server publishes the metrics to the REST API endpoint http://HOST:PORT/SERVER/services/rest/metrics (or on Spring Boot, to http://HOST:PORT/rest/metrics ).
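After the extension is deployed, you can confirm that the custom gauge from the example listener appears alongside the default metrics. As a minimal check, assuming the same baAdmin credentials and localhost endpoint used earlier in this chapter:
curl -u 'baAdmin:password@1' -X GET "http://localhost:8080/kie-server/services/rest/metrics" | grep random_gauge_nanosecond
Because the gauge is labeled per decision, it is typically only reported after at least one DMN decision has been evaluated in a deployed container. | [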
"kieserver.drools.enabled=true kieserver.dmn.enabled=true kieserver.prometheus.enabled=true",
"scrape_configs: - job_name: 'kie-server' metrics_path: /SERVER_PATH/services/rest/metrics basicAuth: username: USER_NAME password: PASSWORD static_configs: - targets: [\"HOST:PORT\"]",
"scrape_configs: - job_name: 'kie' metrics_path: /rest/metrics static_configs: - targets: [\"HOST:PORT\"]",
"cd ~/EAP_HOME/bin ./standalone.sh --c standalone-full.xml",
"curl -u 'baAdmin:password@1' -X GET \"http://localhost:8080/kie-server/services/rest/metrics\"",
"curl -u 'baAdmin:password@1' -X GET \"http://localhost:8080/rest/metrics\"",
"HELP kie_server_container_started_total Kie Server Started Containers TYPE kie_server_container_started_total counter kie_server_container_started_total{container_id=\"task-assignment-kjar-1.0\",} 1.0 HELP solvers_running Number of solvers currently running TYPE solvers_running gauge solvers_running 0.0 HELP dmn_evaluate_decision_nanosecond DMN Evaluation Time TYPE dmn_evaluate_decision_nanosecond histogram HELP solver_duration_seconds Time in seconds it took solver to solve the constraint problem TYPE solver_duration_seconds summary solver_duration_seconds_count{solver_id=\"100tasks-5employees.xml\",} 1.0 solver_duration_seconds_sum{solver_id=\"100tasks-5employees.xml\",} 179.828255925 solver_duration_seconds_count{solver_id=\"24tasks-8employees.xml\",} 1.0 solver_duration_seconds_sum{solver_id=\"24tasks-8employees.xml\",} 179.995759653 HELP drl_match_fired_nanosecond Drools Firing Time TYPE drl_match_fired_nanosecond histogram HELP dmn_evaluate_failed_count DMN Evaluation Failed TYPE dmn_evaluate_failed_count counter HELP kie_server_start_time Kie Server Start Time TYPE kie_server_start_time gauge kie_server_start_time{name=\"myapp-kieserver\",server_id=\"myapp-kieserver\",location=\"http://myapp-kieserver-demo-monitoring.127.0.0.1.nip.io:80/services/rest/server\",version=\"7.4.0.redhat-20190428\",} 1.557221271502E12 HELP kie_server_container_running_total Kie Server Running Containers TYPE kie_server_container_running_total gauge kie_server_container_running_total{container_id=\"task-assignment-kjar-1.0\",} 1.0 HELP solver_score_calculation_speed Number of moves per second for a particular solver solving the constraint problem TYPE solver_score_calculation_speed summary solver_score_calculation_speed_count{solver_id=\"100tasks-5employees.xml\",} 1.0 solver_score_calculation_speed_sum{solver_id=\"100tasks-5employees.xml\",} 6997.0 solver_score_calculation_speed_count{solver_id=\"24tasks-8employees.xml\",} 1.0 solver_score_calculation_speed_sum{solver_id=\"24tasks-8employees.xml\",} 19772.0",
"set env dc/<dc_name> PROMETHEUS_SERVER_EXT_DISABLED=false -n <namespace>",
"apiVersion: app.kiegroup.org/v1 kind: KieApp metadata: name: enable-prometheus spec: environment: rhpam-trial objects: servers: - env: - name: PROMETHEUS_SERVER_EXT_DISABLED value: \"false\"",
"apiVersion: v1 kind: Service metadata: annotations: description: RHPAM Prometheus metrics exposed labels: app: myapp-kieserver application: myapp-kieserver template: myapp-kieserver metrics: rhpam name: rhpam-app-metrics spec: ports: - name: web port: 8080 protocol: TCP targetPort: 8080 selector: deploymentConfig: myapp-kieserver sessionAffinity: None type: ClusterIP",
"apply -f service-metrics.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: rhpam-service-monitor labels: team: frontend spec: selector: matchLabels: metrics: rhpam endpoints: - port: web path: /services/rest/metrics basicAuth: password: name: metrics-secret key: password username: name: metrics-secret key: username",
"apply -f service-monitor.yaml",
"<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-prometheus</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-executor</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>io.prometheus</groupId> <artifactId>simpleclient</artifactId> <version>0.5.0</version> </dependency> </dependencies>",
"package org.kie.server.ext.prometheus; import io.prometheus.client.Gauge; import org.kie.dmn.api.core.ast.DecisionNode; import org.kie.dmn.api.core.event.AfterEvaluateBKMEvent; import org.kie.dmn.api.core.event.AfterEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.BeforeEvaluateBKMEvent; import org.kie.dmn.api.core.event.BeforeEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.api.model.ReleaseId; import org.kie.server.services.api.KieContainerInstance; public class ExampleCustomPrometheusMetricListener implements DMNRuntimeEventListener { private final KieContainerInstance kieContainer; private final Gauge randomGauge = Gauge.build() .name(\"random_gauge_nanosecond\") .help(\"Random gauge as an example of custom KIE Prometheus metric\") .labelNames(\"container_id\", \"group_id\", \"artifact_id\", \"version\", \"decision_namespace\", \"decision_name\") .register(); public ExampleCustomPrometheusMetricListener(KieContainerInstance containerInstance) { kieContainer = containerInstance; } public void beforeEvaluateDecision(BeforeEvaluateDecisionEvent e) { } public void afterEvaluateDecision(AfterEvaluateDecisionEvent e) { DecisionNode decisionNode = e.getDecision(); ReleaseId releaseId = kieContainer.getResource().getReleaseId(); randomGauge.labels(kieContainer.getContainerId(), releaseId.getGroupId(), releaseId.getArtifactId(), releaseId.getVersion(), decisionNode.getModelName(), decisionNode.getModelNamespace()) .set((int) (Math.random() * 100)); } public void beforeEvaluateBKM(BeforeEvaluateBKMEvent event) { } public void afterEvaluateBKM(AfterEvaluateBKMEvent event) { } public void beforeEvaluateContextEntry(BeforeEvaluateContextEntryEvent event) { } public void afterEvaluateContextEntry(AfterEvaluateContextEntryEvent event) { } public void beforeEvaluateDecisionTable(BeforeEvaluateDecisionTableEvent event) { } public void afterEvaluateDecisionTable(AfterEvaluateDecisionTableEvent event) { } public void beforeEvaluateDecisionService(BeforeEvaluateDecisionServiceEvent event) { } public void afterEvaluateDecisionService(AfterEvaluateDecisionServiceEvent event) { } }",
"package org.kie.server.ext.prometheus; import org.jbpm.executor.AsynchronousJobListener; import org.jbpm.services.api.DeploymentEventListener; import org.kie.api.event.rule.AgendaEventListener; import org.kie.api.event.rule.DefaultAgendaEventListener; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.services.api.KieContainerInstance; import org.kie.server.services.prometheus.PrometheusMetricsProvider; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListener; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListenerAdapter; public class MyPrometheusMetricsProvider implements PrometheusMetricsProvider { public DMNRuntimeEventListener createDMNRuntimeEventListener(KieContainerInstance kContainer) { return new ExampleCustomPrometheusMetricListener(kContainer); } public AgendaEventListener createAgendaEventListener(String kieSessionId, KieContainerInstance kContainer) { return new DefaultAgendaEventListener(); } public PhaseLifecycleListener createPhaseLifecycleListener(String solverId) { return new PhaseLifecycleListenerAdapter() { }; } public AsynchronousJobListener createAsynchronousJobListener() { return null; } public DeploymentEventListener createDeploymentEventListener() { return null; } }"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/prometheus-monitoring-con_execution-server |
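A quick end-to-end check of the custom metrics provider described above, as a shell sketch (assuming the example class names, a Maven artifact named custom-prometheus-provider, a KIE Server on localhost:8080 on Red Hat JBoss EAP, and placeholder credentials; adapt all of these to your environment):
mvn clean package
cp target/custom-prometheus-provider-1.0.jar EAP_HOME/standalone/deployments/kie-server.war/WEB-INF/lib/
curl -u 'USER:PASSWORD' http://localhost:8080/kie-server/services/rest/metrics | grep random_gauge_nanosecond
If the provider is registered correctly, the last command returns the random_gauge_nanosecond samples emitted by the example listener after a DMN decision has been evaluated.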
17.12. Saving and Restoring the Network Configuration | 17.12. Saving and Restoring the Network Configuration The command line version of the Network Administration Tool can be used to save the system's network configuration to a file. This file can then be used to restore the network settings to a Red Hat Enterprise Linux system. This feature can be used as part of an automated backup script, to save the configuration before upgrading or reinstalling, or to copy the configuration to a different Red Hat Enterprise Linux system. To save, or export , the network configuration of a system to the file /tmp/network-config , execute the following command as root: To restore, or import , the network configuration from the file created by the previous command, execute the following command as root: The -i option means to import the data, the -c option means to clear the existing configuration prior to importing, and the -f option specifies the file to import. | [
"system-config-network-cmd -e > /tmp/network-config",
"system-config-network-cmd -i -c -f /tmp/network-config"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-network-save-config |
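As a hedged sketch of the automated-backup use mentioned above (run as root; the backup directory and date-stamped file name are illustrative choices, not requirements):
system-config-network-cmd -e > /var/backups/network-config-$(date +%F)
system-config-network-cmd -i -c -f /var/backups/network-config-2024-01-01
The first command can run from cron before an upgrade; the second restores a chosen snapshot, clearing the existing configuration first because of the -c option.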
Chapter 1. Operator APIs | Chapter 1. Operator APIs 1.1. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. CloudCredential [operator.openshift.io/v1] Description CloudCredential provides a means to configure an operator to manage CredentialsRequests. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ClusterCSIDriver [operator.openshift.io/v1] Description The ClusterCSIDriver object allows management and configuration of a CSI driver operator installed by default in OpenShift. The name of the object must be the name of the CSI driver it operates. See the CSIDriverName type for the list of allowed values. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. Console [operator.openshift.io/v1] Description Console provides a means to configure an operator to manage the console. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. Config [operator.openshift.io/v1] Description Config specifies the behavior of the config operator, which is responsible for creating the initial configuration of other components on the cluster. The operator also handles installation, migration, or synchronization of cloud configurations for AWS and Azure cloud-based clusters. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. Config [imageregistry.operator.openshift.io/v1] Description Config is the configuration object for a registry instance managed by the registry operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. Config [samples.operator.openshift.io/v1] Description Config contains the configuration and detailed condition status for the Samples Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. CSISnapshotController [operator.openshift.io/v1] Description CSISnapshotController provides a means to configure an operator to manage the CSI snapshots. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. DNS [operator.openshift.io/v1] Description DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. This supports the DNS-based service discovery specification: https://github.com/kubernetes/dns/blob/master/docs/specification.md More details: https://kubernetes.io/docs/tasks/administer-cluster/coredns Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators.
If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.11. Etcd [operator.openshift.io/v1] Description Etcd provides information to configure an operator to manage etcd. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.12. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.13. ImagePruner [imageregistry.operator.openshift.io/v1] Description ImagePruner is the configuration object for an image registry pruner managed by the registry operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.14. IngressController [operator.openshift.io/v1] Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.15. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.16. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.17. KubeControllerManager [operator.openshift.io/v1] Description KubeControllerManager provides information to configure an operator to manage kube-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.18. KubeScheduler [operator.openshift.io/v1] Description KubeScheduler provides information to configure an operator to manage scheduler. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.19. KubeStorageVersionMigrator [operator.openshift.io/v1] Description KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.20. MachineConfiguration [operator.openshift.io/v1] Description MachineConfiguration provides information to configure an operator to manage Machine Configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.21. Network [operator.openshift.io/v1] Description Network describes the cluster's desired network configuration. It is consumed by the cluster-network-operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.22. OpenShiftAPIServer [operator.openshift.io/v1] Description OpenShiftAPIServer provides information to configure an operator to manage openshift-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.23. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.24. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. A Secret called <name>-ca with two data keys: tls.key - the private key tls.crt - the CA certificate A ConfigMap called <name>-ca with a single data key: cabundle.crt - the CA certificate(s) A Secret called <name>-cert with two data keys: tls.key - the private key tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3. The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object 1.25. ServiceCA [operator.openshift.io/v1] Description ServiceCA provides information to configure an operator to manage the service cert controllers. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.26. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object | [
"More specifically, given an OperatorPKI with <name>, the CNO will manage:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/operator-apis |
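Most of the resources listed above are cluster-scoped singletons, typically named cluster. As an illustrative sketch of inspecting them on a live cluster (assuming cluster-admin access; the resource and namespace names below follow common defaults and may differ in your environment):
oc get authentications.operator.openshift.io cluster -o yaml
oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The first command prints the Authentication operator configuration; the second shows the default IngressController, which is namespaced under openshift-ingress-operator rather than cluster-scoped.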
Chapter 11. Policies | Chapter 11. Policies Each OpenStack service contains resources that are managed by access policies. For example, a resource might include the following functions: Permission to create and start instances The ability to attach a volume to an instance If you are a Red Hat OpenStack Platform (RHOSP) administrator, you can create custom policies to introduce new roles with varying levels of access, or to change the default behavior of existing roles. Important Red Hat does not support customized roles or policies. Syntax errors or misapplied authorization can negatively impact security or usability. If you need customized roles or policies in your production environment, contact Red Hat support for a support exception. 11.1. Reviewing existing policies Policy files for services traditionally existed in the /etc/$service directory. For example, the full path of the policy.json file for Compute (nova) was /etc/nova/policy.json . There are two important architectural changes that affect how you can find existing policies: Red Hat OpenStack Platform is now containerized. Policy files, if present, are in the traditional path if you view them from inside the service container: /etc/$service/policy.json Policy files, if present, are in the following path if you view them from outside the service container: /var/lib/config-data/puppet-generated/$service/etc/$service/policy.json Each service has default policies that are provided in code, with files that are available only if you created them manually, or if they are generated with oslo.policy tooling. To generate a policy file, use the oslopolicy-policy-generator tool from within a container, as in the following example: By default, generated policies are written to stdout by the oslo.policy CLI tools. 11.2. Understanding service policies Service policy file statements are either alias definitions or rules. Alias definitions exist at the top of the file. The following list contains an explanation of the alias definitions from the generated policy.json file for Compute (nova): "context_is_admin": "role:admin" When rule:context_is_admin appears after a target, the policy checks that the user is operating with an administrative context before it allows that action. "admin_or_owner": "is_admin:True or project_id:%(project_id)s" When admin_or_owner appears after a target, the policy checks that the user is either an admin, or that their project ID matches the owning project ID of the target object before it allows that action. "admin_api": "is_admin:True" When admin_api appears after a target, the policy checks that the user is an admin before it allows that action. 11.3. Policy syntax Policy.json files support certain operators so that you can control the target scope of these settings. For example, the following keystone setting contains the rule that only admin users can create users: The section to the left of the : character describes the privilege, and the section to the right defines who can use the privilege. You can also use operators on the right side to further control the scope: ! - No user (including admin) can perform this action. @ and "" - Any user can perform this action. not , and , or - Standard operator functions are available. For example, the following setting means that no users have permission to create new users: 11.4. Using policy files for access control Important Red Hat does not support customized roles or policies. Syntax errors or misapplied authorization can negatively impact security or usability.
If you need customized roles or policies in your production environment, contact Red Hat support for a support exception. To override the default rules, edit the policy.json file for the appropriate OpenStack service. For example, the Compute service has a policy.json in the nova directory, which is the correct location of the file for containerized services when you view it from inside the container. Note You must thoroughly test changes to policy files in a staging environment before implementing them in production. You must check that any changes to the access control policies do not unintentionally weaken the security of any resource. In addition, any changes to a policy.json file are effective immediately and do not require a service restart. Example: Creating a power user role To customize the permissions of a keystone role, update the policy.json file of a service. This means that you can more granularly define the permissions that you assign to a class of users. This example creates a power user role for your deployment with the following privileges: Start an instance. Stop an instance. Manage the volumes that are attached to instances. The intention of this role is to grant additional permissions to certain users, without the need to then grant admin access. To use these privileges, you must grant the following permissions to a custom role: Start an instance: "os_compute_api:servers:start": "role:PowerUsers" Stop an instance: "os_compute_api:servers:stop": "role:PowerUsers" Configure an instance to use a particular volume: "os_compute_api:servers:create:attach_volume": "role:PowerUsers" List the volumes that are attached to an instance: "os_compute_api:os-volumes-attachments:index": "role:PowerUsers" Attach a volume: "os_compute_api:os-volumes-attachments:create": "role:PowerUsers" View the details of an attached volume: "os_compute_api:os-volumes-attachments:show": "role:PowerUsers" Change the volume that is attached to an instance: "os_compute_api:os-volumes-attachments:update": "role:PowerUsers" Delete a volume that is attached to an instance: "os_compute_api:os-volumes-attachments:delete": "role:PowerUsers" Note When you modify the policy.json file, you override the default policy. As a result, members of PowerUsers are the only users that can perform these actions. To allow admin users to retain these permissions, you can create rules for admin_or_power_user. You can also use some basic conditional logic to define role:PowerUsers or role:Admin . Create the custom keystone role: Add an existing user to the role, and assign the role to a project: Note A role assignment is paired exclusively with one project. This means that when you assign a role to a user, you also define the target project at the same time. If you want the user to receive the same role but for a different project, you must assign the role to them again separately but target the different project. View the default nova policy settings: Create custom permissions for the new PowerUsers role by adding the following entries to /var/lib/config-data/puppet-generated/nova/etc/nova/policy.json : Note Test your policy changes before deployment to verify that they work as you expect. You implement the changes when you save this file and restart the nova container. Users that are added to the PowerUsers keystone role receive these privileges. 11.5. Example: Limiting access based on attributes Important Red Hat does not support customized roles or policies. 
Syntax errors or misapplied authorization can negatively impact security or usability. If you need customized roles or policies in your production environment, contact Red Hat support for a support exception. You can create policies that restrict access to API calls based on the attributes of the user making that API call. For example, the following default rule states that keypair deletion is allowed if run from an administrative context, or if the user ID of the token matches the user ID associated with the target. NOTE: * Newly implemented features are not guaranteed to be in every service with each release. Therefore, it is important to write rules using the conventions of the target service's existing policies. For details on viewing these policies, see Reviewing existing policies. * All policies should be rigorously tested in a non-production environment for every version on which they will be deployed, as policies are not guaranteed to be compatible across releases. Based on the above example, you can craft API rules to expand or restrict access to users based on whether or not they own a resource. Additionally, attributes can be combined with other restrictions to form rules, as seen in the example below: Considering the examples above, you can create a unique rule limited to administrators and users, and then use that rule to further restrict actions: Additional resources Policy syntax . 11.6. Modifying policies with heat Important Red Hat does not support customized roles or policies. Syntax errors or misapplied authorization can negatively impact security or usability. If you need customized roles or policies in your production environment, contact Red Hat support for a support exception. You can use heat to configure access policies for certain services in the overcloud. Use the following parameters to set policies on the respective services: Table 11.1. Policy Parameters Parameter Description KeystonePolicies A hash of policies to configure for OpenStack Identity (keystone). IronicApiPolicies A hash of policies to configure for OpenStack Bare Metal (ironic) API. BarbicanPolicies A hash of policies to configure for OpenStack Key Manager (barbican). NeutronApiPolicies A hash of policies to configure for OpenStack Networking (neutron) API. SaharaApiPolicies A hash of policies to configure for OpenStack Clustering (sahara) API. NovaApiPolicies A hash of policies to configure for OpenStack Compute (nova) API. CinderApiPolicies A hash of policies to configure for OpenStack Block Storage (cinder) API. GlanceApiPolicies A hash of policies to configure for OpenStack Image Storage (glance) API. HeatApiPolicies A hash of policies to configure for OpenStack Orchestration (heat) API. To configure policies for a service, give the policy parameter a hash value that contains the service's policies. For example: OpenStack Identity (keystone) uses the KeystonePolicies parameter. Set this parameter in the parameter_defaults section of an environment file: OpenStack Compute (nova) uses the NovaApiPolicies parameter. Set this parameter in the parameter_defaults section of an environment file: 11.7. Auditing your users and roles You can use tools available in Red Hat OpenStack Platform to build a report of role assignments per user and associated privileges. Prerequisites You have an installed Red Hat OpenStack Platform environment. You are logged in to the director as stack.
Procedure Run the openstack role list command to see the roles currently in your environment: Run the openstack role assignment list command to list all users that are members of a particular role. For example, to see all users that have the admin role, run the following: Note You can use the -f {csv,json,table,value,yaml} parameter to export these results. 11.8. Auditing API access You can audit the API calls a given role can access. Repeating this process for each role will result in a comprehensive report on the accessible APIs for each role. Prerequisites An authentication file to source as a user in the target role. An access token in JSON format. A policy file for each service's API you wish to audit. Procedure Start by sourcing an authentication file of a user in the desired role. Capture a Keystone generated token and save it to a file. You can do this by running any openstack-cli command and using the --debug option, which prints the provided token to stdout. You can copy this token and save it to an access file. Use the following command to do this as a single step: Create a policy file. This can be done on an overcloud node that hosts the containerized service of interest. The following example creates a policy file for the cinder service: Using these files, you can now audit the role in question for access to cinder's APIs: | [
"exec -it keystone oslopolicy-policy-generator --namespace keystone",
"\"identity:create_user\": \"rule:admin_required\"",
"\"identity:create_user\": \"!\"",
"openstack role create PowerUsers +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | 7061a395af43455e9057ab631ad49449 | | name | PowerUsers | +-----------+----------------------------------+",
"openstack role add --project [PROJECT_NAME] --user [USER_ID] [PowerUsers-ROLE_ID]",
"oslopolicy-policy-generator --namespace nova",
"{ \"os_compute_api:servers:start\": \"role:PowerUsers\", \"os_compute_api:servers:stop\": \"role:PowerUsers\", \"os_compute_api:servers:create:attach_volume\": \"role:PowerUsers\", \"os_compute_api:os-volumes-attachments:index\": \"role:PowerUsers\", \"os_compute_api:os-volumes-attachments:create\": \"role:PowerUsers\", \"os_compute_api:os-volumes-attachments:show\": \"role:PowerUsers\", \"os_compute_api:os-volumes-attachments:update\": \"role:PowerUsers\", \"os_compute_api:os-volumes-attachments:delete\": \"role:PowerUsers\" }",
"\"os_compute_api:os-keypairs:delete\": \"rule:admin_api or user_id:%(user_id)s\"",
"\"admin_or_owner\": \"is_admin:True or project_id:%(project_id)s\"",
"\"admin_or_user\": \"is_admin:True or user_id:%(user_id)s\" \"os_compute_api:os-instance-actions\": \"rule:admin_or_user\"",
"parameter_defaults: KeystonePolicies: { keystone-context_is_admin: { key: context_is_admin, value: 'role:admin' } }",
"parameter_defaults: NovaApiPolicies: { nova-context_is_admin: { key: 'compute:get_all', value: '@' } }",
"openstack role list -c Name -f value swiftoperator ResellerAdmin admin _member_ heat_stack_user",
"openstack role assignment list --names --role admin +-------+------------------------------------+-------+-----------------+------------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +-------+------------------------------------+-------+-----------------+------------+--------+-----------+ | admin | heat-cfn@Default | | service@Default | | | False | | admin | placement@Default | | service@Default | | | False | | admin | neutron@Default | | service@Default | | | False | | admin | zaqar@Default | | service@Default | | | False | | admin | swift@Default | | service@Default | | | False | | admin | admin@Default | | admin@Default | | | False | | admin | zaqar-websocket@Default | | service@Default | | | False | | admin | heat@Default | | service@Default | | | False | | admin | ironic-inspector@Default | | service@Default | | | False | | admin | nova@Default | | service@Default | | | False | | admin | ironic@Default | | service@Default | | | False | | admin | glance@Default | | service@Default | | | False | | admin | mistral@Default | | service@Default | | | False | | admin | heat_stack_domain_admin@heat_stack | | | heat_stack | | False | | admin | admin@Default | | | | all | False | +-------+------------------------------------+-------+-----------------+------------+--------+-----------+",
"openstack token issue --debug 2>&1 | egrep ^'{\\\"token\\\":' > access.file.json",
"ssh tripleo-admin@CONTROLLER-1 sudo podman exec cinder_api oslopolicy-policy-generator --config-file /etc/cinder/cinder.conf --namespace cinder > cinderpolicy.json",
"oslopolicy-checker --policy cinderpolicy.json --access access.file.json"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_policies_security_and_hardening |
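To extend the power-user example above so that administrators keep the same privileges, a hedged sketch is to define the alias "admin_or_power_user": "role:admin or role:PowerUsers" in the same policy.json file and reference rule:admin_or_power_user in each target. You can then verify a single target with the oslo.policy tooling from the audit procedure (assuming the policy file and access token captured above; the --rule option, if your oslo.policy version provides it, narrows the check to one policy target):
oslopolicy-checker --policy cinderpolicy.json --access access.file.json --rule volume:create
The command reports whether the token stored in access.file.json passes the named rule.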
Chapter 8. Logging using LokiStack | Chapter 8. Logging using LokiStack In logging subsystem documentation, LokiStack refers to the logging subsystem supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. You can query Loki by using the LogQL log query language . 8.1. Deployment Sizing Sizing for Loki follows the format of N<x>.<size> , where the value <N> is the number of instances and <size> specifies performance capabilities. Note 1x.extra-small is for demo purposes only, and is not supported. Table 8.1. Loki Sizing 1x.extra-small 1x.small 1x.medium Data transfer Demo use only. 500GB/day 2TB/day Queries per second (QPS) Demo use only. 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 3 Total CPU requests 5 vCPUs 36 vCPUs 54 vCPUs Total Memory requests 7.5Gi 63Gi 139Gi Total Disk requests 150Gi 300Gi 450Gi 8.1.1. Supported API Custom Resource Definitions LokiStack development is ongoing, and not all APIs are currently supported. CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported in 5.5 RulerConfig rulerconfig.loki.grafana/v1beta1 Technology Preview AlertingRule alertingrule.loki.grafana/v1beta1 Technology Preview RecordingRule recordingrule.loki.grafana/v1beta1 Technology Preview Important Usage of the RulerConfig , AlertingRule , and RecordingRule custom resource definitions (CRDs) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.2. Deploying the LokiStack You can use the OpenShift Container Platform web console to deploy the LokiStack. Prerequisites Logging subsystem for Red Hat OpenShift Operator 5.5 and later Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation) Procedure Install the Loki Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Loki Operator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Under Installed Namespace , select openshift-operators-redhat . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and might publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace .
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that you installed the Loki Operator. Visit the Operators Installed Operators page and look for Loki Operator . Ensure that Loki Operator is listed with Status as Succeeded in all the projects. Create a Secret YAML file that uses the access_key_id and access_key_secret fields to specify your AWS credentials and the bucketnames , endpoint , and region fields to define the object storage location. For example: apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Create the LokiStack custom resource (CR): apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 type: s3 storageClassName: gp2 tenants: mode: openshift-logging Apply the LokiStack CR: $ oc apply -f logging-loki.yaml Create a ClusterLogging custom resource (CR): apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector Apply the ClusterLogging CR: $ oc apply -f cr-lokistack.yaml Enable the Red Hat OpenShift Logging Console Plugin: In the OpenShift Container Platform web console, click Operators Installed Operators . Select the Red Hat OpenShift Logging Operator. Under Console plugin, click Disabled . Select Enable and then Save . This change restarts the openshift-console pods. After the pods restart, you will receive a notification that a web console update is available, prompting you to refresh. After refreshing the web console, click Observe from the left main menu. A new option for Logs is available. 8.3. Forwarding logs to LokiStack To configure log forwarding to the LokiStack gateway, you must create a ClusterLogging custom resource (CR). Prerequisites The Logging subsystem for Red Hat OpenShift version 5.5 or newer is installed on your cluster. The Loki Operator is installed on your cluster. Procedure Create a ClusterLogging custom resource (CR): apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector 8.3.1. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging subsystem to a cluster that already has some logs, rate limit errors might occur while the logging subsystem tries to ingest all of the existing log entries.
In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 8.4. 
Additional Resources Loki Query Language (LogQL) Documentation Grafana Dashboard Documentation Loki Object Storage Documentation Loki Operator IngestionLimitSpec Documentation Loki Storage Schema Documentation | [
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 type: s3 storageClassName: gp2 tenants: mode: openshift-logging",
"oc apply -f logging-loki.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector",
"oc apply -f cr-lokistack.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki collection: type: vector",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-loki |
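To spot the rate-limit condition described above on a live cluster, and to raise the limits without opening an editor, a minimal sketch (assuming the openshift-logging namespace and the logging-loki LokiStack name used in the examples):
oc logs -n openshift-logging -l component=collector | grep '429 Too Many Requests'
oc -n openshift-logging patch lokistack logging-loki --type merge -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'
The first command surfaces collector-side 429 errors; the patch applies the same ingestionBurstSize and ingestionRate values shown in the LokiStack CR snippet.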
Chapter 54. Using external identity providers to authenticate to IdM | Chapter 54. Using external identity providers to authenticate to IdM You can associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 9.1 or later, they receive RHEL Identity Management (IdM) single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. Notable features include: Adding, modifying, and deleting references to external IdPs with ipa idp-* commands. Enabling IdP authentication for users with the ipa user-mod --user-auth-type=idp command. 54.1. The benefits of connecting IdM to an external IdP As an administrator, you might want to allow users stored in an external identity source, such as a cloud services provider, to access RHEL systems joined to your Identity Management (IdM) environment. To achieve this, you can delegate the authentication and authorization process of issuing Kerberos tickets for these users to that external entity. You can use this feature to expand IdM's capabilities and allow users stored in external identity providers (IdPs) to access Linux systems managed by IdM. 54.2. How IdM incorporates logins via external IdPs SSSD 2.7.0 contains the sssd-idp package, which implements the idp Kerberos pre-authentication method. This authentication method follows the OAuth 2.0 Device Authorization Grant flow to delegate authorization decisions to external IdPs: An IdM client user initiates OAuth 2.0 Device Authorization Grant flow, for example, by attempting to retrieve a Kerberos TGT with the kinit utility at the command line. A special code and website link are sent from the Authorization Server to the IdM KDC backend. The IdM client displays the link and the code to the user. In this example, the IdM client outputs the link and code on the command line. The user opens the website link in a browser, which can be on another host, a mobile phone, and so on: The user enters the special code. If necessary, the user logs in to the OAuth 2.0-based IdP. The user is prompted to authorize the client to access information. The user confirms access at the original device prompt. In this example, the user hits the Enter key at the command line. The IdM KDC backend polls the OAuth 2.0 Authorization Server for access to user information. What is supported: Logging in remotely via SSH with the keyboard-interactive authentication method enabled, which allows calling Pluggable Authentication Module (PAM) libraries. Logging in locally with the console via the logind service. Retrieving a Kerberos ticket-granting ticket (TGT) with the kinit utility. What is currently not supported: Logging in to the IdM WebUI directly. To log in to the IdM WebUI, you must first acquire a Kerberos ticket. Logging in to Cockpit WebUI directly. To log in to the Cockpit WebUI, you must first acquire a Kerberos ticket. Additional resources Authentication against external Identity Providers RFC 8628: OAuth 2.0 Device Authorization Grant 54.3. Creating a reference to an external identity provider To connect external identity providers (IdPs) to your Identity Management (IdM) environment, create IdP references in IdM. Complete this procedure to create a reference called my-keycloak-idp to an IdP based on the Keycloak template. For more reference templates, see Example references to different external IdPs in IdM . 
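For orientation before the procedure, the core command has this general shape (a sketch only; the organization, base URL, and client ID values below are placeholders to replace with your own): ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id id13778-client The prerequisites and exact steps follow.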
Prerequisites You have registered IdM as an OAuth application to your external IdP, and obtained a client ID. You can authenticate as the IdM admin account. Your IdM servers are using RHEL 9.1 or later. Your IdM servers are using SSSD 2.7.0 or later. Procedure Authenticate as the IdM admin on an IdM server. Create a reference called my-keycloak-idp to an IdP based on the Keycloak template, where the --base-url option specifies the URL to the Keycloak server in the format server-name.$DOMAIN:$PORT/prefix . Verification Verify that the output of the ipa idp-show command shows the IdP reference you have created. Additional resources Example references to different external IdPs in IdM Options for the ipa idp-* commands to manage external identity providers in IdM The --provider option in the ipa idp-* commands ipa help idp-add 54.4. Example references to different external IdPs in IdM The following table lists examples of the ipa idp-add command for creating references to different IdPs in IdM. Identity Provider Important options Command example Microsoft Identity Platform, Azure AD --provider microsoft --organization Google --provider google GitHub --provider github Keycloak, Red Hat Single Sign-On --provider keycloak --organization --base-url Note The Quarkus version of Keycloak 17 and later has removed the /auth/ portion of the URI. If you use the non-Quarkus distribution of Keycloak in your deployment, include /auth/ in the --base-url option. Okta --provider okta Additional resources Creating a reference to an external identity provider Options for the ipa idp-* commands to manage external identity providers in IdM The --provider option in the ipa idp-* commands 54.5. Options for the ipa idp-* commands to manage external identity providers in IdM The following examples show how to configure references to external IdPs based on the different IdP templates. Use the following options to specify your settings: --provider the predefined template for one of the known identity providers --client-id the OAuth 2.0 client identifier issued by the IdP during application registration. As the application registration procedure is specific to each IdP, refer to their documentation for details. If the external IdP is Red Hat Single Sign-On (SSO), see Creating an OpenID Connect Client . --base-url base URL for IdP templates, required by Keycloak and Okta --organization Domain or Organization ID from the IdP, required by Microsoft Azure --secret (optional) Use this option if you have configured your external IdP to require a secret from confidential OAuth 2.0 clients. If you use this option when creating an IdP reference, you are prompted for the secret interactively. Protect the client secret as a password. Note SSSD in RHEL 9.1 only supports non-confidential OAuth 2.0 clients that do not use a client secret. If you want to use external IdPs that require a client secret from confidential clients, you must use SSSD in RHEL 9.2 and later. Additional resources Creating a reference to an external identity provider Example references to different external IdPs in IdM The --provider option in the ipa idp-* commands 54.6. Managing references to external IdPs After you have created a reference to an external identity provider (IdP), you can find, show, modify, and delete that reference. This example shows you how to manage a reference to an external IdP named keycloak-server1 . Prerequisites You can authenticate as the IdM admin account. Your IdM servers are using RHEL 9.1 or later.
Your IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . Procedure Authenticate as the IdM admin on an IdM server. Manage the IdP reference. To find an IdP reference whose entry includes the string keycloak : To display an IdP reference named my-keycloak-idp : To modify an IdP reference, use the ipa idp-mod command. For example, to change the secret for an IdP reference named my-keycloak-idp , specify the --secret option to be prompted for the secret: To delete an IdP reference named my-keycloak-idp : 54.7. Enabling an IdM user to authenticate via an external IdP To enable an IdM user to authenticate via an external identity provider (IdP), associate the external IdP reference you have previously created with the user account. This example associates the external IdP reference keycloak-server1 with the user idm-user-with-external-idp . Prerequisites Your IdM client and IdM servers are using RHEL 9.1 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . Procedure Modify the IdM user entry to associate an IdP reference with the user account: Verification Verify that the output of the ipa user-show command for that user displays references to the IdP: 54.8. Retrieving an IdM ticket-granting ticket as an external IdP user If you have delegated authentication for an Identity Management (IdM) user to an external identity provider (IdP), the IdM user can request a Kerberos ticket-granting ticket (TGT) by authenticating to the external IdP. Complete this procedure to: Retrieve and store an anonymous Kerberos ticket locally. Request the TGT for the idm-user-with-external-idp user by using kinit with the -T option to enable a Flexible Authentication via Secure Tunneling (FAST) channel, which provides a secure connection between the Kerberos client and the Key Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 9.1 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . You have associated an external IdP reference with the user account. See Enabling an IdM user to authenticate via an external IdP . The user that you are initially logged in as has write permissions on a directory in the local filesystem. Procedure Use Anonymous PKINIT to obtain a Kerberos ticket and store it in a file named ./fast.ccache . Optional: View the retrieved ticket: Begin authenticating as the IdM user, using the -T option to enable the FAST communication channel. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. The pa_type = 152 indicates external IdP authentication. 54.9. Logging in to an IdM client via SSH as an external IdP user To log in to an IdM client via SSH as an external identity provider (IdP) user, begin the login process on the command line. When prompted, perform the authentication process at the website associated with the IdP, and finish the process at the Identity Management (IdM) client.
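In outline, the exchange looks like this (the host and user names are illustrative, and the exact prompt text depends on your IdP):
ssh idm-user-with-external-idp@client.idm.example.com
The client prints an authorization website link and a one-time code; after you authenticate and authorize in a browser, pressing Enter at the SSH prompt completes the login, and the IdM KDC then issues the user's Kerberos ticket.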
Prerequisites Your IdM client and IdM servers are using RHEL 9.1 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Creating a reference to an external identity provider . You have associated an external IdP reference with the user account. See Enabling an IdM user to authenticate via an external IdP . Procedure Attempt to log in to the IdM client via SSH. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. 54.10. The --provider option in the ipa idp-* commands The following identity providers (IdPs) support OAuth 2.0 device authorization grant flow: Microsoft Identity Platform, including Azure AD Google GitHub Keycloak, including Red Hat Single Sign-On (SSO) Okta When using the ipa idp-add command to create a reference to one of these external IdPs, you can specify the IdP type with the --provider option, which expands into additional options as described below: --provider=microsoft Microsoft Azure IdPs allow parametrization based on the Azure tenant ID, which you can specify with the --organization option to the ipa idp-add command. If you need support for the live.com IdP, specify the option --organization common . Choosing --provider=microsoft expands to use the following options. The value of the --organization option replaces the string ${ipaidporg} in the table. Option Value --auth-uri=URI https://login.microsoftonline.com/${ipaidporg}/oauth2/v2.0/authorize --dev-auth-uri=URI https://login.microsoftonline.com/${ipaidporg}/oauth2/v2.0/devicecode --token-uri=URI https://login.microsoftonline.com/${ipaidporg}/oauth2/v2.0/token --userinfo-uri=URI https://graph.microsoft.com/oidc/userinfo --keys-uri=URI https://login.microsoftonline.com/common/discovery/v2.0/keys --scope=STR openid email --idp-user-id=STR email --provider=google Choosing --provider=google expands to use the following options: Option Value --auth-uri=URI https://accounts.google.com/o/oauth2/auth --dev-auth-uri=URI https://oauth2.googleapis.com/device/code --token-uri=URI https://oauth2.googleapis.com/token --userinfo-uri=URI https://openidconnect.googleapis.com/v1/userinfo --keys-uri=URI https://www.googleapis.com/oauth2/v3/certs --scope=STR openid email --idp-user-id=STR email --provider=github Choosing --provider=github expands to use the following options: Option Value --auth-uri=URI https://github.com/login/oauth/authorize --dev-auth-uri=URI https://github.com/login/device/code --token-uri=URI https://github.com/login/oauth/access_token --userinfo-uri=URI https://api.github.com/user --scope=STR user --idp-user-id=STR login --provider=keycloak With Keycloak, you can define multiple realms or organizations. Since it is often a part of a custom deployment, both base URL and realm ID are required, and you can specify them with the --base-url and --organization options to the ipa idp-add command: Choosing --provider=keycloak expands to use the following options. The value you specify in the --base-url option replaces the string ${ipaidpbaseurl} in the table, and the value you specify for the --organization option replaces the string ${ipaidporg} .
Option Value --auth-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/auth --dev-auth-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/auth/device --token-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/token --userinfo-uri=URI https://${ipaidpbaseurl}/realms/${ipaidporg}/protocol/openid-connect/userinfo --scope=STR openid email --idp-user-id=STR email --provider=okta After registering a new organization in Okta, a new base URL is associated with it. You can specify this base URL with the --base-url option to the ipa idp-add command: Choosing --provider=okta expands to use the following options. The value you specify for the --base-url option replaces the string ${ipaidpbaseurl} in the table. Option Value --auth-uri=URI https://${ipaidpbaseurl}/oauth2/v1/authorize --dev-auth-uri=URI https://${ipaidpbaseurl}/oauth2/v1/device/authorize --token-uri=URI https://${ipaidpbaseurl}/oauth2/v1/token --userinfo-uri=URI https://${ipaidpbaseurl}/oauth2/v1/userinfo --scope=STR openid email --idp-user-id=STR email Additional resources Pre-populated IdP templates
"kinit admin",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id id13778 ------------------------------------------------ Added Identity Provider reference \"my-keycloak-idp\" ------------------------------------------------ Identity Provider reference name: my-keycloak-idp Authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth Device authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth/device Token URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/token User info URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/userinfo Client identifier: ipa_oidc_client Scope: openid email External IdP user identifier attribute: email",
"ipa idp-show my-keycloak-idp",
"ipa idp-add my-azure-idp --provider microsoft --organization main --client-id <azure_client_id>",
"ipa idp-add my-google-idp --provider google --client-id <google_client_id>",
"ipa idp-add my-github-idp --provider github --client-id <github_client_id>",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id <keycloak_client_id>",
"ipa idp-add my-okta-idp --provider okta --base-url dev-12345.okta.com --client-id <okta_client_id>",
"kinit admin",
"ipa idp-find keycloak",
"ipa idp-show my-keycloak-idp",
"ipa idp-mod my-keycloak-idp --secret",
"ipa idp-del my-keycloak-idp",
"ipa user-mod idm-user-with-external-idp --idp my-keycloak-idp --idp-user-id [email protected] --user-auth-type=idp --------------------------------- Modified user \"idm-user-with-external-idp\" --------------------------------- User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"kinit -n -c ./fast.ccache",
"klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]",
"kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:",
"klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.",
"[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"ipa idp-add MySSO --provider keycloak --org main --base-url keycloak.domain.com:8443/auth --client-id <your-client-id>",
"ipa idp-add MyOkta --provider okta --base-url dev-12345.okta.com --client-id <your-client-id>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_using-external-identity-providers-to-authenticate-to-idm_managing-users-groups-hosts |
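The chapter above walks through the delegation flow one command at a time. As a minimal sketch that strings those documented commands together, assuming a Keycloak server at keycloak.idm.example.com:8443/auth, a realm named main, a client ID of id13778, an existing IdM user idm-user-with-external-idp, and that Anonymous PKINIT is enabled in the realm, the following script creates the IdP reference, associates it with the user, and requests a TGT over a FAST channel:

#!/bin/bash
# Sketch only: delegate IdM authentication for one user to Keycloak.
# Assumed values (replace for your deployment): base URL, realm "main",
# client ID "id13778", user "idm-user-with-external-idp".
set -euo pipefail

kinit admin

# Create the IdP reference from the Keycloak template.
ipa idp-add my-keycloak-idp --provider keycloak --organization main \
    --base-url keycloak.idm.example.com:8443/auth --client-id id13778

# Associate the reference with the user and switch the user to IdP authentication.
ipa user-mod idm-user-with-external-idp --idp my-keycloak-idp \
    --idp-user-id [email protected] --user-auth-type=idp

# Obtain an anonymous PKINIT ticket to protect the FAST channel, then request
# the user's TGT; this prints a device-authorization URL to open in a browser.
kinit -n -c ./fast.ccache
kinit -T ./fast.ccache idm-user-with-external-idp

# Confirm external IdP pre-authentication (pa_type = 152).
klist -C | grep pa_type

The final kinit blocks until you complete the browser-based authorization and press Enter, exactly as in the interactive procedure above.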
10.2. About PAM Configuration Files | 10.2. About PAM Configuration Files Each PAM-aware application or service has a file in the /etc/pam.d/ directory. Each file in this directory has the same name as the service to which it controls access. For example, the login program defines its service name as login and installs the /etc/pam.d/login PAM configuration file. Warning It is highly recommended to configure PAMs using the authconfig tool instead of manually editing the PAM configuration files. 10.2.1. PAM Configuration File Format Each PAM configuration file contains a group of directives that define the module (the authentication configuration area) and any controls or arguments with it. The directives all have a simple syntax that identifies the module purpose (interface) and the configuration settings for the module. In a PAM configuration file, the module interface is the first field defined. For example: A PAM interface is essentially the type of authentication action which that specific module can perform. Four types of PAM module interface are available, each corresponding to a different aspect of the authentication and authorization process: auth - This module interface authenticates users. For example, it requests and verifies the validity of a password. Modules with this interface can also set credentials, such as group memberships. account - This module interface verifies that access is allowed. For example, it checks if a user account has expired or if a user is allowed to log in at a particular time of day. password - This module interface is used for changing user passwords. session - This module interface configures and manages user sessions. Modules with this interface can also perform additional tasks that are needed to allow access, like mounting a user's home directory and making the user's mailbox available. An individual module can provide any or all module interfaces. For instance, pam_unix.so provides all four module interfaces. The module name, such as pam_unix.so , provides PAM with the name of the library containing the specified module interface. The directory name is omitted because the application is linked to the appropriate version of libpam , which can locate the correct version of the module. All PAM modules generate a success or failure result when called. Control flags tell PAM what to do with the result. Modules can be listed ( stacked ) in a particular order, and the control flags determine how important the success or failure of a particular module is to the overall goal of authenticating the user to the service. There are several simple flags [2] , which use only a keyword to set the configuration: required - The module result must be successful for authentication to continue. If the test fails at this point, the user is not notified until the results of all module tests that reference that interface are complete. requisite - The module result must be successful for authentication to continue. However, if a test fails at this point, the user is notified immediately with a message reflecting the first failed required or requisite module test. sufficient - The module result is ignored if it fails. However, if the result of a module flagged sufficient is successful and no modules flagged required have failed, then no other results are required and the user is authenticated to the service. optional - The module result is ignored. 
A module flagged as optional only becomes necessary for successful authentication when no other modules reference the interface. include - Unlike the other controls, this does not relate to how the module result is handled. This flag pulls in all lines in the configuration file which match the given parameter and appends them as an argument to the module. Module interface directives can be stacked , or placed upon one another, so that multiple modules are used together for one purpose. Note If a module's control flag uses the sufficient or requisite value, then the order in which the modules are listed is important to the authentication process. Using stacking, the administrator can require specific conditions to exist before the user is allowed to authenticate. For example, the setup utility normally uses several stacked modules, as seen in its PAM configuration file: auth sufficient pam_rootok.so - This line uses the pam_rootok.so module to check whether the current user is root, by verifying that their UID is 0. If this test succeeds, no other modules are consulted and the command is executed. If this test fails, the next module is consulted. auth include system-auth - This line includes the content of the /etc/pam.d/system-auth file and processes this content for authentication. account required pam_permit.so - This line uses the pam_permit.so module to allow the root user or anyone logged in at the console to reboot the system. session required pam_permit.so - This line is related to the session setup. Using pam_permit.so , it ensures that the setup utility does not fail. For some modules, PAM uses arguments to pass information to the pluggable module during authentication. For example, the pam_pwquality.so module checks how strong a password is and can take several arguments. In the following example, enforce_for_root specifies that even the password of the root user must successfully pass the strength check and retry defines that a user will receive three opportunities to enter a strong password. Invalid arguments are generally ignored and do not otherwise affect the success or failure of the PAM module. Some modules, however, may fail on invalid arguments. Most modules report errors to the journald service. For information on how to use journald and the related journalctl tool, see the System Administrator's Guide . Note The journald service was introduced in Red Hat Enterprise Linux 7.1. In previous versions of Red Hat Enterprise Linux, most modules reported errors to the /var/log/secure file. 10.2.2. Annotated PAM Configuration Example Example 10.1, "Simple PAM Configuration" is a sample PAM application configuration file: Example 10.1. Simple PAM Configuration The first line is a comment, indicated by the hash mark ( # ) at the beginning of the line. Lines two through four stack three modules for login authentication. auth required pam_securetty.so - This module ensures that if the user is trying to log in as root, the TTY on which the user is logging in is listed in the /etc/securetty file, if that file exists. If the TTY is not listed in the file, any attempt to log in as root fails with a Login incorrect message. auth required pam_unix.so nullok - This module prompts the user for a password and then checks the password using the information stored in /etc/passwd and, if it exists, /etc/shadow . The argument nullok instructs the pam_unix.so module to allow a blank password. auth required pam_nologin.so - This is the final authentication step. It checks whether the /etc/nologin file exists.
If it exists and the user is not root, authentication fails. Note In this example, all three auth modules are checked, even if the first auth module fails. This prevents the user from knowing at what stage their authentication failed. Such knowledge in the hands of an attacker could allow them to more easily deduce how to crack the system. account required pam_unix.so - This module performs any necessary account verification. For example, if shadow passwords have been enabled, the account interface of the pam_unix.so module checks to see if the account has expired or if the user has not changed the password within the allowed grace period. password required pam_pwquality.so retry=3 - If a password has expired, the password component of the pam_pwquality.so module prompts for a new password. It then tests the newly created password to see whether it can easily be determined by a dictionary-based password cracking program. The argument retry=3 specifies that if the test fails the first time, the user has two more chances to create a strong password. password required pam_unix.so shadow nullok use_authtok - This line specifies that if the program changes the user's password, it does so by using the password interface of the pam_unix.so module. The argument shadow instructs the module to create shadow passwords when updating a user's password. The argument nullok instructs the module to allow the user to change their password from a blank password, otherwise a null password is treated as an account lock. The final argument on this line, use_authtok , provides a good example of the importance of order when stacking PAM modules. This argument instructs the module not to prompt the user for a new password. Instead, it accepts any password that was recorded by a previous password module. In this way, all new passwords must pass the pam_pwquality.so test for secure passwords before being accepted. session required pam_unix.so - The final line instructs the session interface of the pam_unix.so module to manage the session. This module logs the user name and the service type to /var/log/secure at the beginning and end of each session. This module can be supplemented by stacking it with other session modules for additional functionality. [2] There are many complex control flags that can be set. These are set in attribute=value pairs; a complete list of attributes is available in the pam.d manpage.
"module_interface control_flag module_name module_arguments",
"auth required pam_unix.so",
"cat /etc/pam.d/setup auth sufficient pam_rootok.so auth include system-auth account required pam_permit.so session required pam_permit.so",
"password requisite pam_pwquality.so enforce_for_root retry=3",
"#%PAM-1.0 auth required pam_securetty.so auth required pam_unix.so nullok auth required pam_nologin.so account required pam_unix.so password required pam_pwquality.so retry=3 password required pam_unix.so shadow nullok use_authtok session required pam_unix.so"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/pam_configuration_files |
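To make the interaction of the control flags concrete, the following hypothetical /etc/pam.d/ service file combines the flags and arguments discussed in this chapter. It is an illustrative sketch, not a drop-in configuration for any shipped service:

#%PAM-1.0
# sufficient: success short-circuits the auth stack, so root skips the
# remaining auth checks; a failure here is ignored and processing continues.
auth       sufficient   pam_rootok.so
# requisite: failure aborts immediately, so a bad TTY is reported before
# any password prompt is shown.
auth       requisite    pam_securetty.so
# required: failure is remembered, but the remaining modules still run.
auth       required     pam_unix.so nullok
account    required     pam_unix.so
# attribute=value arguments: enforce the strength check even for root and
# give the user three attempts.
password   requisite    pam_pwquality.so enforce_for_root retry=3
# use_authtok reuses the password already vetted by pam_pwquality.so above.
password   required     pam_unix.so shadow nullok use_authtok
session    required     pam_unix.so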
Chapter 8. OperatorGroup [operators.coreos.com/v1] | Chapter 8. OperatorGroup [operators.coreos.com/v1] Description OperatorGroup is the unit of multitenancy for OLM managed operators. It constrains the installation of operators in its namespace to a specified set of target namespaces. Type object Required metadata 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorGroupSpec is the spec for an OperatorGroup resource. status object OperatorGroupStatus is the status for an OperatorGroupResource. 8.1.1. .spec Description OperatorGroupSpec is the spec for an OperatorGroup resource. Type object Property Type Description selector object Selector selects the OperatorGroup's target namespaces. serviceAccountName string ServiceAccountName is the admin specified service account which will be used to deploy operator(s) in this operator group. staticProvidedAPIs boolean Static tells OLM not to update the OperatorGroup's providedAPIs annotation targetNamespaces array (string) TargetNamespaces is an explicit set of namespaces to target. If it is set, Selector is ignored. upgradeStrategy string UpgradeStrategy defines the upgrade strategy for operators in the namespace. There are currently two supported upgrade strategies: Default: OLM will only allow clusterServiceVersions to move to the replacing phase from the succeeded phase. This effectively means that OLM will not allow operators to move to the next version if an installation or upgrade has failed. TechPreviewUnsafeFailForward: OLM will allow clusterServiceVersions to move to the replacing phase from the succeeded phase or from the failed phase. Additionally, OLM will generate new installPlans when a subscription references a failed installPlan and the catalog has been updated with a new upgrade for the existing set of operators. WARNING: The TechPreviewUnsafeFailForward upgrade strategy is unsafe and may result in unexpected behavior or unrecoverable data loss unless you have deep understanding of the set of operators being managed in the namespace. 8.1.2. .spec.selector Description Selector selects the OperatorGroup's target namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.3.
.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.4. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.5. .status Description OperatorGroupStatus is the status for an OperatorGroupResource. Type object Required lastUpdated Property Type Description conditions array Conditions is an array of the OperatorGroup's conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string LastUpdated is a timestamp of the last time the OperatorGroup's status was Updated. namespaces array (string) Namespaces is the set of target namespaces for the OperatorGroup. serviceAccountRef object ServiceAccountRef references the service account object specified. 8.1.6. .status.conditions Description Conditions is an array of the OperatorGroup's conditions. Type array 8.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. 
reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 8.1.8. .status.serviceAccountRef Description ServiceAccountRef references the service account object specified. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/operatorgroups GET : list objects of kind OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups DELETE : delete collection of OperatorGroup GET : list objects of kind OperatorGroup POST : create an OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name} DELETE : delete an OperatorGroup GET : read the specified OperatorGroup PATCH : partially update the specified OperatorGroup PUT : replace the specified OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name}/status GET : read status of the specified OperatorGroup PATCH : partially update status of the specified OperatorGroup PUT : replace status of the specified OperatorGroup 8.2.1. /apis/operators.coreos.com/v1/operatorgroups HTTP method GET Description list objects of kind OperatorGroup Table 8.1. HTTP responses HTTP code Response body 200 - OK OperatorGroupList schema 401 - Unauthorized Empty 8.2.2. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups HTTP method DELETE Description delete collection of OperatorGroup Table 8.2.
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorGroup Table 8.3. HTTP responses HTTP code Response body 200 - OK OperatorGroupList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorGroup Table 8.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.5. Body parameters Parameter Type Description body OperatorGroup schema Table 8.6. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 202 - Accepted OperatorGroup schema 401 - Unauthorized Empty 8.2.3. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name} Table 8.7. Global path parameters Parameter Type Description name string name of the OperatorGroup HTTP method DELETE Description delete an OperatorGroup Table 8.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorGroup Table 8.10. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorGroup Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.12. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorGroup Table 8.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.14. Body parameters Parameter Type Description body OperatorGroup schema Table 8.15. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 401 - Unauthorized Empty 8.2.4. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name}/status Table 8.16. Global path parameters Parameter Type Description name string name of the OperatorGroup HTTP method GET Description read status of the specified OperatorGroup Table 8.17. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorGroup Table 8.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.19. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorGroup Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.21. Body parameters Parameter Type Description body OperatorGroup schema Table 8.22. HTTP responses HTTP code Response body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operatorhub_apis/operatorgroup-operators-coreos-com-v1
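The schema reference above does not include a complete manifest, so the following is a hedged sketch of a minimal OperatorGroup that scopes operators to a single namespace. The names example-operatorgroup and example-operators are assumptions for illustration, not values from the reference:

$ cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: example-operatorgroup    # hypothetical name
  namespace: example-operators   # hypothetical namespace
spec:
  # Explicit target list; when targetNamespaces is set, spec.selector is ignored.
  targetNamespaces:
  - example-operators
  # Default: CSVs may only move to the replacing phase from the succeeded phase.
  upgradeStrategy: Default
EOF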
Chapter 2. Configuring your firewall | Chapter 2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. There are no special configuration considerations for services running on only controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443 Required to access Alibaba Cloud services and resources. 
Review the Alibaba endpoints_config.go file to find the exact endpoints to allow for the regions that you use. AWS aws.amazon.com 443 Used to install and manage clusters in an AWS environment. *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must include the following URLs in your allowlist: 443 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to find the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. *.s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. *.cloudfront.net 443 Used to provide access to CloudFront. If you use the AWS Security Token Service (STS) and the private S3 bucket, you must provide access to CloudFront. GCP *.googleapis.com 443 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to find the endpoints to allow for your APIs. accounts.google.com 443 Required to access your GCP account. Microsoft Azure management.azure.com 443 Required to access Microsoft Azure services and resources. Review the Microsoft Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. *.blob.core.windows.net 443 Required to download Ignition files. login.microsoftonline.com 443 Required to access Microsoft Azure services and resources. Review the Azure REST API reference in the Microsoft Azure documentation to find the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function *.apps.<cluster_name>.<base_domain> 443 Required to access the default cluster routes unless you set an ingress wildcard during installation. api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. console.redhat.com 443 Required for your cluster token. mirror.openshift.com 443 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. quayio-production-s3.s3.amazonaws.com 443 Required to access Quay image content in AWS. 
rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com storage.googleapis.com/openshift-release 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. Additional resources OpenID Connect requirements for AWS STS | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installation_configuration/configuring-firewall |
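Before installation, it can help to confirm that the firewall actually permits the traffic described above. The following is a minimal sketch that probes a few of the always-required endpoints over HTTPS from a host behind the firewall; the endpoint list is illustrative, not exhaustive, and should be extended with the cloud- and registry-specific URLs from the tables in this chapter:

#!/bin/bash
# Sketch: check TLS reachability of a subset of required endpoints.
endpoints=(
  registry.redhat.io
  quay.io
  api.openshift.com
  mirror.openshift.com
)
for host in "${endpoints[@]}"; do
  # curl exits non-zero only if the connection fails; an HTTP error code
  # (for example, 403 on a bare GET) still indicates the host is reachable.
  if curl --connect-timeout 5 --silent --output /dev/null "https://${host}"; then
    echo "OK   ${host}:443"
  else
    echo "FAIL ${host}:443 (check your allowlist)"
  fi
done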
13.2. Creating a Partition | 13.2. Creating a Partition Warning Do not attempt to create a partition on a device that is in use. Procedure 13.1. Creating a partition Before creating a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). Start parted , where /dev/sda is the device on which to create the partition: View the current partition table to determine if there is enough free space: If there is not enough free space, you can resize an existing partition. Refer to Section 13.4, "Resizing a Partition" for details. 13.2.1. Making the Partition From the partition table, determine the start and end points of the new partition and what partition type it should be. You can only have four primary partitions (with no extended partition) on a device. If you need more than four partitions, you can have three primary partitions, one extended partition, and multiple logical partitions within the extended. For an overview of disk partitions, refer to the appendix An Introduction to Disk Partitions in the Red Hat Enterprise Linux 6 Installation Guide . For example, to create a primary partition with an ext3 file system from 1024 megabytes to 2048 megabytes on a hard drive, type the following command: Note If you use the mkpartfs command instead, the file system is created after the partition is created. However, parted does not support creating an ext3 file system. Thus, if you wish to create an ext3 file system, use mkpart and create the file system with the mkfs command as described later. The changes start taking place as soon as you press Enter , so review the command before executing it. After creating the partition, use the print command to confirm that it is in the partition table with the correct partition type, file system type, and size. Also remember the minor number of the new partition so that you can label any file systems on it. You should also view the output of cat /proc/partitions after parted is closed to make sure the kernel recognizes the new partition. The maximum number of partitions parted will create is 128. While the GUID Partition Table (GPT) specification allows for more partitions by growing the area reserved for the partition table, common practice used by parted is to limit it to enough area for 128 partitions.
"parted /dev/ sda",
"print",
"mkpart primary ext3 1024 2048"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s2-disk-storage-parted-create-part |
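The chapter notes that mkpart does not create a file system and defers the mkfs step. As a hedged sketch of that follow-up, assuming the new partition appeared as /dev/sda3 (confirm the actual device name from print and /proc/partitions before formatting):

# Verify that the kernel sees the new partition.
cat /proc/partitions
# Create the ext3 file system that mkpart itself does not create.
mkfs -t ext3 /dev/sda3
# Optionally label the file system so it can be mounted by label.
e2label /dev/sda3 work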
Configuring your Red Hat build of Quarkus applications by using a YAML file | Configuring your Red Hat build of Quarkus applications by using a YAML file Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_yaml_file/index |
Chapter 6. References | Chapter 6. References 6.1. Red Hat Configuring RHEL 8 for SAP HANA2 installation Configuring and managing high availability clusters on RHEL 8 Support Policies for RHEL High Availability Clusters Support Policies for RHEL High Availability Clusters - Fencing/STONITH Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications 6.2. SAP SAP HANA Server Installation and Update Guide SAP HANA System Replication Implementing a HA/DR Provider SAP Note 2057595 - FAQ: SAP HANA High Availability SAP Note 2063657 - SAP HANA System Replication Takeover Decision Guideline SAP Note 3007062 - FAQ: SAP HANA & Third Party Cluster Solutions 6.3. Other Be Prepared for Using Pacemaker Cluster for SAP HANA - Part 1: Basics Be Prepared for Using Pacemaker Cluster for SAP HANA - Part 2: Failure of Both Nodes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_references_automating-sap-hana-scale-up-system-replication |
Chapter 4. Support for FIPS cryptography | Chapter 4. Support for FIPS cryptography You can install an OpenShift Container Platform cluster in FIPS mode. OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL 9 computer that is configured to operate in FIPS mode, and you must use a FIPS-capable version of the installation program. See the section titled Obtaining a FIPS-capable installation program using `oc adm extract` . For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 4.1. Obtaining a FIPS-capable installation program using oc adm extract OpenShift Container Platform requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. You can obtain this binary by extracting it from the release image by using the OpenShift CLI ( oc ). After you have obtained the binary, you proceed with the cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . Prerequisites You have installed the OpenShift CLI ( oc ) with version 4.16 or newer. Procedure Extract the FIPS-capable binary from the installation program by running the following command: USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=openshift-install-fips --to "USD{extract_dir}" USD{RELEASE_IMAGE} where: <pullsecret_file> Specifies the name of a file that contains your pull secret. <extract_dir> Specifies the directory where you want to extract the binary. <RELEASE_IMAGE> Specifies the Quay.io URL of the OpenShift Container Platform release you are using. For more information on finding the release image, see Extracting the OpenShift Container Platform installation program . Proceed with cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . Additional resources Extracting the OpenShift Container Platform installation program 4.2. Obtaining a FIPS-capable installation program using the public OpenShift mirror OpenShift Container Platform requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. You can obtain this binary by downloading it from the public OpenShift mirror. 
After you have obtained the binary, proceed with the cluster installation, replacing all instances of the openshift-install binary with openshift-install-fips . Prerequisites You have access to the internet. Procedure Download the installation program from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-4.16/openshift-install-rhel9-amd64.tar.gz . Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-rhel9-amd64.tar.gz Proceed with cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . 4.3. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 4.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.16 Attributes Limitations FIPS support in RHEL 9 and RHCOS operating systems. The FIPS implementation does not use a function that performs hash computation and signature generation or validation in a single step. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 9 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using x86_64 , ppc64le , and s390x architectures. 4.4. FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 4.4.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 4.4.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 4.4.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. 4.5. 
Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . Amazon Web Services Microsoft Azure Bare metal Google Cloud Platform IBM Cloud® IBM Power® IBM Z® and IBM® LinuxONE IBM Z® and IBM® LinuxONE with RHEL KVM Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Installing the system in FIPS mode . | [
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=openshift-install-fips --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"tar -xvf openshift-install-rhel9-amd64.tar.gz"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installation_overview/installing-fips |
Chapter 5. Configuring the web console in OpenShift Container Platform | Chapter 5. Configuring the web console in OpenShift Container Platform You can modify the OpenShift Container Platform web console to set a logout redirect URL or disable the quick start tutorials. 5.1. Prerequisites Deploy an OpenShift Container Platform cluster. 5.2. Configuring the web console You can configure the web console settings by editing the console.config.openshift.io resource. Edit the console.config.openshift.io resource: $ oc edit console.config.openshift.io cluster The following example displays the sample resource definition for the console: apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: "" 1 status: consoleURL: "" 2 1 Specify the URL of the page to load when a user logs out of the web console. If you do not specify a value, the user returns to the login page for the web console. Specifying a logoutRedirect URL allows your users to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 2 The web console URL. To update this to a custom value, see Customizing the web console URL . 5.3. Disabling quick starts in the web console You can use the Administrator perspective of the web console to disable one or more quick starts. Prerequisites You have cluster administrator permissions and are logged in to the web console. Procedure In the Administrator perspective, navigate to Administration > Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . From the Action drop-down list, select Customize , which opens the Cluster configuration page. On the General tab, in the Quick starts section, you can select items in either the Enabled or Disabled list, and move them from one list to the other by using the arrow buttons. To enable or disable a single quick start, click the quick start, then use the single arrow buttons to move the quick start to the appropriate list. To enable or disable multiple quick starts at once, press Ctrl and click the quick starts you want to move. Then, use the single arrow buttons to move the quick starts to the appropriate list. To enable or disable all quick starts at once, click the double arrow buttons to move all of the quick starts to the appropriate list. | [
"oc edit console.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/web_console/configuring-web-console |
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_z_and_ibm_linuxone/index |
10.6. Registering Custom Authentication Plug-ins | 10.6. Registering Custom Authentication Plug-ins Custom authentication plug-in modules can be registered through the CA Console. Authentication plug-in modules can also be deleted through the CA Console. Before deleting a module, delete instances that are based on that module. Note For writing custom plug-ins, refer to the Authentication Plug-in Tutorial . Create the custom authentication class. For this example, the custom authentication plug-in is called UidPwdDirAuthenticationTestms.java . Compile the new class. Create a directory in the CA's WEB-INF web directory to hold the custom classes, so that the CA can access them for the enrollment forms. Copy the new plug-in files into the new classes directory, and set the owner to the Certificate System system user ( pkiuser ). Log into the console. Register the plug-in. In the Configuration tab, click Authentication in the navigation tree. In the right pane, click the Authentication Plug-in Registration tab. The tab lists modules that are already registered. To register a plug-in, click Register . The Register Authentication Plug-in Implementation window appears. Specify which module to register by filling in the two fields: Plugin name. The name for the module. Class name. The full name of the class for this module. This is the path to the implementing Java™ class. If this class is part of a package, include the package name. For example, to register a class named customAuth in a package named com.customplugins , the class name is com.customplugins.customAuth . After registering the module, add the module as an active authentication instance. In the Configuration tab, click Authentication in the navigation tree. In the right pane, click the Authentication Instance tab. Click Add . Select the custom module, UidPwdDirAuthenticationTestms.java , from the list to add the module. Fill in the appropriate configuration for the module. Note pkiconsole is being deprecated. Create a new end-entity enrollment form to use the new authentication module. Add the new profile to the CA's CS.cfg file. Note Back up the CS.cfg file before editing it. Restart the CA. | [
"javac -d . -classpath USDCLASSPATH UidPwdDirAuthenticationTestms.java",
"mkdir /usr/share/pki/ca/webapps/ca/WEB-INF/classes",
"cp -pr com /usr/share/pki/ca/webapps/ca/WEB-INF/classes chown -R pkiuser:pkiuser /usr/share/pki/ca/webapps/ca/WEB-INF/classes",
"pkiconsole https://server.example.com:8443/ca",
"cd /var/lib/pki/pki-tomcat/ca/profiles/ca cp -p caDirUserCert.cfg caDirUserCertTestms.cfg vi caDirUserCertTestms.cfg desc=Test ms - This certificate profile is for enrolling user certificates with directory-based authentication. visible=true enable=true enableBy=admin name=Test ms - Directory-Authenticated User Dual-Use Certificate Enrollment auth.instance_id=testms",
"vim /var/lib/pki/ instance-name /ca/conf/CS.cfg profile.list=caUserCert,caDualCert,caSignedLogCert,caTPSCert,caRARouterCert,caRouterCert,caServerCert,caOtherCert,caCACert,caInstallCACert,caRACert,caOCSPCert,caTransportCert,caDirUserCert,caAgentServerCert,caAgentFileSigning,caCMCUserCert,caFullCMCUserCert,caSimpleCMCUserCert,caTokenDeviceKeyEnrollment,caTokenUserEncryptionKeyEnrollment,caTokenUserSigningKeyEnrollment,caTempTokenDeviceKeyEnrollment,caTempTokenUserEncryptionKeyEnrollment,caTempTokenUserSigningKeyEnrollment,caAdminCert,caInternalAuthServerCert,caInternalAuthTransportCert,caInternalAuthKRAstorageCert,caInternalAuthSubsystemCert,caInternalAuthOCSPCert,DomainController, caDirUserCertTestms profile.caDirUserCertTestms.class_id=caEnrollImpl profile.caDirUserCertTestms.config=/var/lib/pki/pki-tomcat/ca/profiles/ca/caDirUserCertTestms.cfg",
"pki-server restart instance_name"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Managing_Authentication_Plug_ins |
Use Red Hat Quay | Use Red Hat Quay Red Hat Quay 3.12 Use Red Hat Quay Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/use_red_hat_quay/index |
11.4.2.4. Special Conditions and Actions | 11.4.2.4. Special Conditions and Actions Special characters used before Procmail recipe conditions and actions change the way they are interpreted. The following characters may be used after the * character at the beginning of a recipe's condition line: ! - In the condition line, this character inverts the condition, causing a match to occur only if the condition does not match the message. < - Checks if the message is under a specified number of bytes. > - Checks if the message is over a specified number of bytes. The following characters are used to perform special actions: ! - In the action line, this character tells Procmail to forward the message to the specified email addresses. $ - Refers to a variable set earlier in the rc file. This is often used to set a common mailbox that is referred to by various recipes. | - Starts a specified program to process the message. { and } - Constructs a nesting block, used to contain additional recipes to apply to matching messages. If no special character is used at the beginning of the action line, Procmail assumes that the action line is specifying the mailbox in which to write the message. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-email-procmail-recipes-special |
Chapter 6. Preparing to perform an EUS-to-EUS update | Chapter 6. Preparing to perform an EUS-to-EUS update Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform <4.y> to <4.y+1>, and then to <4.y+2>. You cannot update from OpenShift Container Platform <4.y> to <4.y+2> directly. However, administrators who want to update between two Extended Update Support (EUS) versions can do so incurring only a single reboot of non-control plane hosts. Important EUS-to-EUS updates are only viable between even-numbered minor versions of OpenShift Container Platform. There are a number of caveats to consider when attempting an EUS-to-EUS update. EUS-to-EUS updates are only offered after updates between all versions involved have been made available in stable channels. If you encounter issues during or after upgrading to the odd-numbered minor version but before upgrading to the even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance. You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed. Until the machine config pools are unpaused and the update is complete, some features and bug fixes in <4.y+1> and <4.y+2> of OpenShift Container Platform are not available. Any cluster can update through EUS channels as a conventional update without pausing pools, but only clusters with non-control-plane MachineConfigPool objects can perform an EUS-to-EUS update with pools paused. 6.1. EUS-to-EUS update The following procedure pauses all non-master machine config pools and performs updates from OpenShift Container Platform <4.y> to <4.y+1> to <4.y+2>, then unpauses the previously paused machine config pools. Following this procedure reduces the total update duration and the number of times worker nodes are restarted. Prerequisites Review the release notes for OpenShift Container Platform <4.y+1> and <4.y+2>. Review the release notes and product lifecycles for any layered products and Operator Lifecycle Manager (OLM) Operators. Some may require updates either before or during an EUS-to-EUS update. Ensure that you are familiar with version-specific prerequisites, such as the removal of deprecated APIs, that are required prior to updating from OpenShift Container Platform <4.y+1> to <4.y+2>. If your cluster uses in-tree vSphere volumes, update vSphere to version 7.0u3L+ or 8.0u2+. Important If you do not update vSphere to 7.0u3L+ or 8.0u2+ before initiating an OpenShift Container Platform update, known issues might occur with your cluster after the update. For more information, see Known Issues with OpenShift 4.12 to 4.13 or 4.13 to 4.14 vSphere CSI Storage Migration . 6.1.1. EUS-to-EUS update using the web console Prerequisites Verify that machine config pools are unpaused. Have access to the web console as a user with admin privileges. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version.
You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of Up to date and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, click Compute > MachineConfigPools and review the contents of the Update status column. Note If your machine config pools have an Updating status, please wait for this status to change to Up to date . This process could take several minutes. Set your channel to eus-<4.y+2> . To set your channel, click Administration > Cluster Settings > Channel . You can edit your channel by clicking on the current hyperlinked channel. Pause all worker machine pools except for the master pool. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses next to the machine config pool you'd like to pause and click Pause updates . Update to version <4.y+1> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+1> updates are complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. If necessary, update your OLM Operators by using the Administrator perspective on the web console. You can find more information on how to perform these actions in "Updating installed Operators"; see "Additional resources". Update to version <4.y+2> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+2> update is complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Unpause all previously paused machine config pools. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses next to the machine config pool you'd like to unpause and click Unpause updates . Important If pools are paused, the cluster is not permitted to upgrade to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that your cluster has completed the update to version <4.y+2>. You can verify that your pools have updated on the MachineConfigPools tab under the Compute page by confirming that the Update status has a value of Up to date . You can verify that your cluster has completed the update by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Additional resources Preparing for an Operator update Updating a cluster by using the web console Updating installed Operators 6.1.2. EUS-to-EUS update using the CLI Prerequisites Verify that machine config pools are unpaused. Update the OpenShift CLI ( oc ) to the target version before each update. Important It is highly discouraged to skip this prerequisite. If the OpenShift CLI ( oc ) is not updated to the target version before your update, unexpected issues may occur. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version.
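Alongside the Operator updates, this is also a good moment to confirm the version-specific prerequisites listed earlier, such as APIs scheduled for removal. The following is a minimal sketch using the standard APIRequestCount resource; treat the exact output formatting as an assumption to adapt to your needs:

# List APIs flagged for removal in an upcoming release, so you can find remaining callers.
oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'

As for the Operator update mentioned in the previous step: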
You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of UPDATED and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, run the following command: $ oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False Your current version is <4.y>, and your intended version to update is <4.y+2>. Change to the eus-<4.y+2> channel by running the following command: $ oc adm upgrade channel eus-<4.y+2> Note If you receive an error message indicating that eus-<4.y+2> is not one of the available channels, this indicates that Red Hat is still rolling out EUS version updates. This rollout process generally takes 45-90 days starting at the GA date. Pause all worker machine pools except for the master pool by running the following command: $ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}' Note You cannot pause the master pool. Update to the latest version by running the following command: $ oc adm upgrade --to-latest Example output Updating to latest version <4.y+1.z> Review the cluster version to ensure that the updates are complete by running the following command: $ oc adm upgrade Example output Cluster version is <4.y+1.z> ... Update to version <4.y+2> by running the following command: $ oc adm upgrade --to-latest Retrieve the cluster version to ensure that the <4.y+2> updates are complete by running the following command: $ oc adm upgrade Example output Cluster version is <4.y+2.z> ... To update your worker nodes to <4.y+2>, unpause all previously paused machine config pools by running the following command: $ oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}' Important If pools are not unpaused, the cluster is not permitted to update to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that the update to version <4.y+2> is complete by running the following command: $ oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False Additional resources Updating installed Operators 6.1.3. EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager In addition to the EUS-to-EUS update steps mentioned for the web console and CLI, there are additional steps to consider when performing EUS-to-EUS updates for clusters with the following: Layered products Operators installed through Operator Lifecycle Manager (OLM) What is a layered product? Layered products refer to products that are made of multiple underlying products that are intended to be used together and cannot be broken into individual subscriptions. For examples of layered OpenShift Container Platform products, see Layered Offering On OpenShift . As you perform an EUS-to-EUS update for the clusters of layered products and those of Operators that have been installed through OLM, you must complete the following: Ensure that all of your Operators previously installed through OLM are updated to their latest version in their latest channel.
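One quick way to review what is installed and which channels are in use is to query the standard OLM resources. This is a sketch using only stock oc output formatting; adjust namespaces to match your cluster:

# List every Operator subscription with its channel and installed CSV.
oc get subscriptions.operators.coreos.com -A -o custom-columns=NAMESPACE:.metadata.namespace,PACKAGE:.spec.name,CHANNEL:.spec.channel,CSV:.status.installedCSV

# Cross-check the resolved Operator versions.
oc get csv -A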
Updating the Operators ensures that they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. For information on how to update your Operators, see "Preparing for an Operator update" in "Additional resources". Confirm the cluster version compatibility between the current and intended Operator versions. You can verify which versions your OLM Operators are compatible with by using the Red Hat OpenShift Container Platform Operator Update Information Checker . As an example, here are the steps to perform an EUS-to-EUS update from <4.y> to <4.y+2> for OpenShift Data Foundation (ODF). This can be done through the CLI or web console. For information on how to update clusters through your desired interface, see EUS-to-EUS update using the web console and "EUS-to-EUS update using the CLI" in "Additional resources". Example workflow Pause the worker machine pools. Upgrade OpenShift <4.y> → OpenShift <4.y+1>. Upgrade ODF <4.y> → ODF <4.y+1>. Upgrade OpenShift <4.y+1> → OpenShift <4.y+2>. Upgrade to ODF <4.y+2>. Unpause the worker machine pools. Note The upgrade to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. Additional resources Preparing for an Operator update EUS-to-EUS update using the web console EUS-to-EUS update using the CLI | [
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+2.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/updating_clusters/preparing-eus-eus-upgrade |
Chapter 1. Introduction | Chapter 1. Introduction The Migration Toolkit for Runtimes product will be End of Life on September 30th, 2024. All customers using this product should start their transition to Migration Toolkit for Applications . Migration Toolkit for Applications is fully backwards compatible with all features and rulesets available in Migration Toolkit for Runtimes and will be maintained in the long term. 1.1. About the MTR plugin for Eclipse You can migrate and modernize applications by using the Migration Toolkit for Runtimes (MTR) plugin for Eclipse. The MTR plugin analyzes your projects using customizable rulesets, marks issues in the source code, provides guidance to fix the issues, and offers automatic code replacement, if possible. 1.2. About the Migration Toolkit for Runtimes What is the Migration Toolkit for Runtimes? The Migration Toolkit for Runtimes (MTR) is an extensible and customizable rule-based tool that simplifies the migration and modernization of Java applications. MTR examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. MTR supports many migration paths, including the following examples: Upgrading to the latest release of Red Hat JBoss Enterprise Application Platform Migrating from Oracle WebLogic or IBM WebSphere Application Server to Red Hat JBoss Enterprise Application Platform Containerizing applications and making them cloud-ready Migrating from Java Spring Boot to Quarkus Updating from Oracle JDK to OpenJDK Upgrading from OpenJDK 8 to OpenJDK 11 Upgrading from OpenJDK 11 to OpenJDK 17 Upgrading from OpenJDK 17 to OpenJDK 21 Migrating EAP Java applications to Azure Migrating Spring Boot Java applications to Azure For more information about use cases and migration paths, see the MTR for developers web page. How does the Migration Toolkit for Runtimes simplify migration? The Migration Toolkit for Runtimes looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTR generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved. How do I learn more? See the Introduction to the Migration Toolkit for Runtimes to learn more about the features, supported configurations, system requirements, and available tools in the Migration Toolkit for Runtimes. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/eclipse_plugin_guide/introduction_eclipse-code-ready-studio-guide |
4.247. python-psycopg2 | 4.247. python-psycopg2 4.247.1. RHBA-2012:0145 - python-psycopg2 bug fix and enhancement update An updated python-psycopg2 package that fixes multiple bugs and adds multiple enhancements is now available for Red Hat Enterprise Linux 6. The python-psycopg2 package provides a PostgreSQL database adapter for the Python programming language. The python-psycopg2 package has been upgraded to upstream version 2.0.14, which provides a number of bug fixes and enhancements over the previous version, including the fix for a memory leak in cursor handling. This update also ensures better compatibility with the PostgreSQL object-relational database management system version 8.4. (BZ# 787164 ) All users of python-psycopg2 are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-psycopg2 |
4.5.4. Configuring Redundant Ring Protocol | 4.5.4. Configuring Redundant Ring Protocol As of Red Hat Enterprise Linux 6.4, the Red Hat High Availability Add-On supports the configuration of redundant ring protocol. When using redundant ring protocol, there are a variety of considerations you must take into account, as described in Section 8.6, "Configuring Redundant Ring Protocol" . Clicking on the Redundant Ring tab displays the Redundant Ring Protocol Configuration page. This page displays all of the nodes that are currently configured for the cluster. If you are configuring a system to use redundant ring protocol, you must specify the Alternate Name for each node for the second ring. The Redundant Ring Protocol Configuration page optionally allows you to specify the Alternate Ring Multicast Address , the Alternate Ring CMAN Port , and the Alternate Ring Multicast Packet TTL for the second ring. If you specify a multicast address for the second ring, either the alternate multicast address or the alternate port must be different from the multicast address for the first ring. If you specify an alternate port, the port numbers of the first ring and the second ring must differ by at least two, since the system itself uses port and port-1 to perform operations. If you do not specify an alternate multicast address, the system will automatically use a different multicast address for the second ring. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-rrp-conga-CA |
Chapter 17. Monitoring resources | Chapter 17. Monitoring resources The following chapter details how to configure monitoring and reporting for managed systems. This includes host configuration, content views, compliance, subscriptions, registered hosts, promotions, and synchronization. 17.1. Using the Red Hat Satellite content dashboard The Red Hat Satellite content dashboard contains various widgets which provide an overview of the host configuration, content views, compliance reports, subscriptions and hosts currently registered, promotions and synchronization, and a list of the latest notifications. In the Satellite web UI, navigate to Monitor > Dashboard to access the content dashboard. The dashboard can be rearranged by clicking on a widget and dragging it to a different position. The following widgets are available: Host Configuration Status An overview of the configuration states and the number of hosts associated with it during the last reporting interval. The following table shows the descriptions of the possible configuration states. Table 17.1. Host configuration states Icon State Description Hosts that had performed modifications without error Host that successfully performed modifications during the last reporting interval. Hosts in error state Hosts on which an error was detected during the last reporting interval. Good host reports in the last 35 minutes Hosts without error that did not perform any modifications in the last 35 minutes. Hosts that had pending changes Hosts on which some resources would be applied but Puppet was configured to run in the noop mode. Out of sync hosts Hosts that were not synchronized and the report was not received during the last reporting interval. Hosts with no reports Hosts for which no reports were collected during the last reporting interval. Hosts with alerts disabled Hosts which are not being monitored. Click the particular configuration status to view hosts associated with it. Host Configuration Chart A pie chart shows the proportion of the configuration status and the percentage of all hosts associated with it. Latest Events A list of messages produced by hosts including administration information, product and subscription changes, and any errors. Monitor this section for global notifications sent to all users and to detect any unusual activity or errors. Run Distribution (last 30 minutes) A graph shows the distribution of the running Puppet agents during the last puppet interval which is 30 minutes by default. In this case, each column represents a number of reports received from clients during 3 minutes. New Hosts A list of the recently created hosts. Click the host for more details. Task Status A summary of all current tasks, grouped by their state and result. Click the number to see the list of corresponding tasks. Latest Warning/Error Tasks A list of the latest tasks that have been stopped due to a warning or error. Click a task to see more details. Discovered Hosts A list of all bare-metal hosts detected on the provisioning network by the Discovery plugin. Latest Errata A list of all errata available for hosts registered to Satellite. Content Views A list of all content views in Satellite and their publish status. Sync Overview An overview of all products or repositories enabled in Satellite and their synchronization status. All products that are in the queue for synchronization, are unsynchronized or have been previously synchronized are listed in this section. 
Host Subscription Status An overview of the subscriptions currently consumed by the hosts registered to Satellite. A subscription is a purchased certificate that unlocks access to software, upgrades, and security fixes for hosts. The following table shows the possible states of subscriptions. Table 17.2. Host subscription states Icon State Description Invalid Hosts that have products installed, but are not correctly subscribed. These hosts need attention immediately. Partial Hosts that have a subscription and a valid entitlement, but are not using their full entitlements. These hosts should be monitored to ensure they are configured as expected. Valid Hosts that have a valid entitlement and are using their full entitlements. Click the subscription type to view hosts associated with subscriptions of the selected type. Subscription Status An overview of the current subscription totals that shows the number of active subscriptions, the number of subscriptions that expire in the next 120 days, and the number of subscriptions that have recently expired. Host Collections A list of all host collections in Satellite and their status, including the number of content hosts in each host collection. Virt-who Configuration Status An overview of the status of reports received from the virt-who daemon running on hosts in the environment. The following table shows the possible states. Table 17.3. virt-who configuration states State Description No Reports No report has been received because either an error occurred during the virt-who configuration deployment, or the configuration has not been deployed yet, or virt-who cannot connect to Satellite during the scheduled interval. No Change No report has been received because the hypervisor did not detect any changes on the virtual machines, or virt-who failed to upload the reports during the scheduled interval. If you added a virtual machine but the configuration is in the No Change state, check that virt-who is running. OK The report has been received without any errors during the scheduled interval. Total Configurations A total number of virt-who configurations. Click the configuration status to see all configurations in this state. The widget also lists the three latest configurations in the No Change state under Latest Configurations Without Change . Latest Compliance Reports A list of the latest compliance reports. Each compliance report shows a number of rules passed (P), failed (F), or othered (O). Click the host for the detailed compliance report. Click the policy for more details on that policy. Compliance Reports Breakdown A pie chart shows the distribution of compliance reports according to their status. Red Hat Insights Actions Red Hat Insights is a tool embedded in Satellite that checks the environment and suggests actions you can take. The actions are divided into 4 categories: Availability, Stability, Performance, and Security. Red Hat Insights Risk Summary A table shows the distribution of the actions according to the risk levels. Risk level represents how critical the action is and how likely it is to cause an actual issue. The possible risk levels are: Low, Medium, High, and Critical. Note It is not possible to change the date format displayed in the Satellite web UI. 17.1.1. Managing tasks Red Hat Satellite keeps a complete log of all planned or performed tasks, such as repositories synchronized, errata applied, and content views published. To review the log, navigate to Monitor > Satellite Tasks > Tasks .
In the Task window, you can search for specific tasks, view their status, details, and elapsed time since they started. You can also cancel and resume one or more tasks. The tasks are managed using the Dynflow engine. Remote tasks have a timeout, which can be adjusted as needed. To adjust timeout settings In the Satellite web UI, navigate to Administer > Settings . Enter %_timeout in the search box and click Search . The search should return four settings, including a description. In the Value column, click the edit icon next to a number to edit it. Enter the desired value in seconds, and click Save . Note Adjusting the %_finish_timeout values might help in case of low bandwidth. Adjusting the %_accept_timeout values might help in case of high latency. When a task is initialized, any back-end service that will be used in the task, such as Candlepin or Pulp, will be checked for correct functioning. If the check fails, you will receive an error similar to the following one: If the back-end service checking feature turns out to be causing any trouble, it can be disabled as follows. To disable checking for services In the Satellite web UI, navigate to Administer > Settings . Enter check_services_before_actions in the search box and click Search . In the Value column, click the icon to edit the value. From the drop-down menu, select false . Click Save . 17.2. Configuring RSS notifications To view Satellite event notification alerts, click the Notifications icon in the upper right of the screen. By default, the Notifications area displays RSS feed events published in the Red Hat Satellite Blog . The feed is refreshed every 12 hours and the Notifications area is updated whenever new events become available. You can configure the RSS feed notifications by changing the URL feed. The supported feed format is RSS 2.0 and Atom. For an example of the RSS 2.0 feed structure, see the Red Hat Satellite Blog feed . For an example of the Atom feed structure, see the Foreman blog feed . To configure RSS feed notifications In the Satellite web UI, navigate to Administer > Settings and select the Notifications tab. In the RSS URL row, click the edit icon in the Value column and type the required URL. In the RSS enable row, click the edit icon in the Value column to enable or disable this feature. 17.3. Monitoring Satellite Server Audit records list the changes made by all users on Satellite. This information can be used for maintenance and troubleshooting. Procedure In the Satellite web UI, navigate to Monitor > Audits to view the audit records. To obtain a list of all the audit attributes, use the following command: 17.4. Monitoring Capsule Server The following section shows how to use the Satellite web UI to find Capsule information valuable for maintenance and troubleshooting. 17.4.1. Viewing general Capsule information In the Satellite web UI, navigate to Infrastructure > Capsules to view a table of Capsule Servers registered to Satellite Server. The information contained in the table answers the following questions: Is Capsule Server running? This is indicated by a green icon in the Status column. A red icon indicates an inactive Capsule; use the service foreman-proxy restart command on Capsule Server to activate it. What services are enabled on Capsule Server? In the Features column you can verify whether the Capsule, for example, provides a DHCP service or acts as a Pulp mirror. Capsule features can be enabled during installation or configured afterward. For more information, see Installing Capsule Server .
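Both of these questions can also be answered from the Capsule host itself, which is useful when the Status column shows a red icon. The following is a minimal sketch; the smart proxy conventionally listens on port 9090, but treat the port and endpoint as assumptions to verify for your version:

# Is the foreman-proxy service running? Restart it if not, matching the advice above.
systemctl status foreman-proxy
systemctl restart foreman-proxy

# Which features does the Capsule advertise? (port 9090 and /features are the usual defaults)
curl -sk https://localhost:9090/features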
What organizations and locations is Capsule Server assigned to? A Capsule Server can be assigned to multiple organizations and locations, but only Capsules belonging to the currently selected organization are displayed. To list all Capsules, select Any Organization from the context menu in the top left corner. After changing the Capsule configuration, select Refresh from the drop-down menu in the Actions column to ensure the Capsule table is up to date. Click the Capsule name to view further details. At the Overview tab, you can find the same information as in the Capsule table. In addition, you can answer the following questions: Which hosts are managed by Capsule Server? The number of associated hosts is displayed next to the Hosts managed label. Click the number to view the details of associated hosts. How much storage space is available on Capsule Server? The amount of storage space occupied by the Pulp content in /var/lib/pulp is displayed. The remaining storage space available on the Capsule is also shown. 17.4.2. Monitoring services In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the selected Capsule. At the Services tab, you can find basic information on Capsule services, such as the list of DNS domains, or the number of Pulp workers. The appearance of the page depends on what services are enabled on Capsule Server. Services providing more detailed status information can have dedicated tabs at the Capsule page. For more information, see Section 17.4.3, "Monitoring Puppet" . 17.4.3. Monitoring Puppet In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the selected Capsule. At the Puppet tab you can find the following: A summary of Puppet events, an overview of latest Puppet runs, and the synchronization status of associated hosts at the General sub-tab. A list of Puppet environments at the Environments sub-tab. At the Puppet CA tab you can find the following: A certificate status overview and the number of autosign entries at the General sub-tab. A table of CA certificates associated with the Capsule at the Certificates sub-tab. Here you can inspect the certificate expiry data, or cancel the certificate by clicking Revoke . A list of autosign entries at the Autosign entries sub-tab. Here you can create an entry by clicking New or delete one by clicking Delete . Note The Puppet and Puppet CA tabs are available only if you have Puppet enabled in your Satellite. Additional resources For more information, see Enabling Puppet Integration with Satellite in Managing configurations using Puppet integration . | [
"There was an issue with the backend service candlepin: Connection refused - connect(2).",
"foreman-rake audits:list_attributes"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/Monitoring_Resources_admin |
Chapter 1. Kafka tuning overview | Chapter 1. Kafka tuning overview Fine-tuning the performance of your Kafka deployment involves optimizing various configuration properties according to your specific requirements. This section provides an introduction to common configuration options available for Kafka brokers, producers, and consumers. While a minimum set of configurations is necessary for Kafka to function, Kafka properties allow for extensive adjustments. Through configuration properties, you can enhance latency, throughput, and overall efficiency, ensuring that your Kafka deployment meets the demands of your applications. For effective tuning, take a methodical approach. Begin by analyzing relevant metrics to identify potential bottlenecks or areas for improvement. Adjust configuration parameters iteratively, monitoring the impact on performance metrics, and then refine your settings accordingly. For more information about Apache Kafka configuration properties, see the Apache Kafka documentation . Note The guidance provided here offers a starting point for tuning your Kafka deployment. Finding the optimal configuration depends on factors such as workload, infrastructure, and performance objectives. 1.1. Mapping properties and values How you specify configuration properties depends on the type of deployment. If you deployed Streams for Apache Kafka on OCP, you can use the Kafka resource to add configuration for Kafka brokers through the config property. With Streams for Apache Kafka on RHEL, you add the configuration to a properties file as environment variables. When you add config properties to custom resources, you use a colon (':') to map the property and value. Example configuration in a custom resource num.partitions:1 When you add the properties as environment variables, you use an equal sign ('=') to map the property and value. Example configuration as an environment variable num.partitions=1 Note Some examples in this guide may show resource configuration specifically for Streams for Apache Kafka on OpenShift. However, the properties presented are equally applicable as environment variables when using Streams for Apache Kafka on RHEL. 1.2. Tools that help with tuning The following tools help with Kafka tuning: Cruise Control generates optimization proposals that you can use to assess and implement a cluster rebalance Strimzi Quotas plugin sets limits on brokers Rack configuration spreads broker partitions across racks and allows consumers to fetch data from the nearest replica Additional resources For more information on these tools, see the following guides: Deploying and Managing Streams for Apache Kafka on OpenShift Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper | [
"num.partitions:1",
"num.partitions=1"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_tuning/con-config-tuning-intro-str |
function::user_int16 | function::user_int16 Name function::user_int16 - Retrieves a 16-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the 16-bit integer from Description Returns the 16-bit integer value from a given user space address. Returns zero when user space data is not accessible. | [
"user_int16:long(addr:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-int16 |
21.4. The guestfish Shell | 21.4. The guestfish Shell guestfish is an interactive shell that you can use from the command line or from shell scripts to access guest virtual machine file systems. All of the functionality of the libguestfs API is available from the shell. To begin viewing or editing a virtual machine disk image, enter the following command, substituting the path to your intended disk image: --ro means that the disk image is opened read-only. This mode is always safe but does not allow write access. Only omit this option when you are certain that the guest virtual machine is not running, or the disk image is not attached to a live guest virtual machine. It is not possible to use libguestfs to edit a live guest virtual machine, and attempting to do so will result in irreversible disk corruption. /path/to/disk/image is the path to the disk. This can be a file, a host physical machine logical volume (such as /dev/VG/LV), or a SAN LUN (/dev/sdf3). Note libguestfs and guestfish do not require root privileges. You only need to run them as root if the disk image being accessed needs root to read or write or both. When you start guestfish interactively, it will display this prompt: At the prompt, type run to initiate the library and attach the disk image. This can take up to 30 seconds the first time it is done. Subsequent starts will complete much faster. Note libguestfs will use hardware virtualization acceleration such as KVM (if available) to speed up this process. Once the run command has been entered, other commands can be used, as the following section demonstrates. 21.4.1. Viewing File Systems with guestfish This section provides information on viewing file systems with guestfish. 21.4.1.1. Manual Listing and Viewing The list-filesystems command will list file systems found by libguestfs. This output shows a Red Hat Enterprise Linux 4 disk image: Other useful commands are list-devices , list-partitions , lvs , pvs , vfs-type and file . You can get more information and help on any command by typing help command , as shown in the following output: To view the actual contents of a file system, it must first be mounted. You can use guestfish commands such as ls , ll , cat , more , download and tar-out to view and download files and directories. Note There is no concept of a current working directory in this shell. Unlike ordinary shells, you cannot, for example, use the cd command to change directories. All paths must be fully qualified starting at the top with a forward slash ( / ) character. Use the Tab key to complete paths. To exit from the guestfish shell, type exit or enter Ctrl+d . 21.4.1.2. Via guestfish inspection Instead of listing and mounting file systems by hand, it is possible to let guestfish itself inspect the image and mount the file systems as they would be in the guest virtual machine. To do this, add the -i option on the command line: Because guestfish needs to start up the libguestfs back end in order to perform the inspection and mounting, the run command is not necessary when using the -i option. The -i option works for many common Linux guest virtual machines. 21.4.1.3. Accessing a guest virtual machine by name A guest virtual machine can be accessed from the command line when you specify its name as known to libvirt (in other words, as it appears in virsh list --all ). Use the -d option to access a guest virtual machine by its name, with or without the -i option: 21.4.2. Adding Files with guestfish To add a file with guestfish you need to have the complete URI.
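Before looking at the URI formats themselves, note that the read-only inspection steps from the previous section combine naturally into a script, in the same style as the examples in Section 21.4.5. The following is a minimal sketch; the image path and the file read from the guest are hypothetical examples:

#!/bin/bash -
set -e
# Read-only inspection with automatic mounting (-i): safe even while the guest runs.
guestfish --ro -a /path/to/disk/image -i <<'EOF'
ll /
cat /etc/redhat-release
EOF

As for the URI needed by the -a option: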
The file can be a local file or a file located on a network block device (NBD) or a remote block device (RBD). The format used for the URI should be like any of these examples. For local files, use ///: guestfish -a disk .img guestfish -a file:/// directory / disk .img guestfish -a nbd:// example.com [ : port ] guestfish -a nbd:// example.com [ : port ]/ exportname guestfish -a nbd://?socket=/ socket guestfish -a nbd:/// exportname ?socket=/ socket guestfish -a rbd:/// pool / disk guestfish -a rbd:// example.com [ : port ]/ pool / disk 21.4.3. Modifying Files with guestfish To modify files, create directories or make other changes to a guest virtual machine, first heed the warning at the beginning of this section: your guest virtual machine must be shut down . Editing or changing a running disk with guestfish will result in disk corruption. This section gives an example of editing the /boot/grub/grub.conf file. When you are sure the guest virtual machine is shut down you can omit the --ro flag in order to get write access using a command such as: Commands to edit files include edit , vi and emacs . Many commands also exist for creating files and directories, such as write , mkdir , upload and tar-in . 21.4.4. Other Actions with guestfish You can also format file systems, create partitions, create and resize LVM logical volumes and much more, with commands such as mkfs , part-add , lvresize , lvcreate , vgcreate and pvcreate . 21.4.5. Shell Scripting with guestfish Once you are familiar with using guestfish interactively, according to your needs, writing shell scripts with it may be useful. The following is a simple shell script to add a new MOTD (message of the day) to a guest: 21.4.6. Augeas and libguestfs Scripting Combining libguestfs with Augeas can help when writing scripts to manipulate Linux guest virtual machine configuration. For example, the following script uses Augeas to parse the keyboard configuration of a guest virtual machine, and to print out the layout. Note that this example only works with guest virtual machines running Red Hat Enterprise Linux: Augeas can also be used to modify configuration files. You can modify the above script to change the keyboard layout: Note the three changes between the two scripts: The --ro option has been removed in the second example, giving the ability to write to the guest virtual machine. The aug-get command has been changed to aug-set to modify the value instead of fetching it. The new value will be "gb" (including the quotes). The aug-save command is used here so Augeas will write the changes out to disk. Note More information about Augeas can be found on the website http://augeas.net . guestfish can do much more than we can cover in this introductory document. For example, creating disk images from scratch: Or copying out whole directories from a disk image: For more information see the man page guestfish(1). | [
"guestfish --ro -a /path/to/disk/image",
"guestfish --ro -a /path/to/disk/image Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs>",
"><fs> run ><fs> list-filesystems /dev/vda1: ext3 /dev/VolGroup00/LogVol00: ext3 /dev/VolGroup00/LogVol01: swap",
"><fs> help vfs-type NAME vfs-type - get the Linux VFS type corresponding to a mounted device SYNOPSIS vfs-type mountable DESCRIPTION This command gets the filesystem type corresponding to the filesystem on \"device\". For most filesystems, the result is the name of the Linux VFS module which would be used to mount this filesystem if you mounted it without specifying the filesystem type. For example a string such as \"ext3\" or \"ntfs\".",
"guestfish --ro -a /path/to/disk/image -i Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 4 (Nahant Update 8) /dev/VolGroup00/LogVol00 mounted on / /dev/vda1 mounted on /boot ><fs> ll / total 210 drwxr-xr-x. 24 root root 4096 Oct 28 09:09 . drwxr-xr-x 21 root root 4096 Nov 17 15:10 .. drwxr-xr-x. 2 root root 4096 Oct 27 22:37 bin drwxr-xr-x. 4 root root 1024 Oct 27 21:52 boot drwxr-xr-x. 4 root root 4096 Oct 27 21:21 dev drwxr-xr-x. 86 root root 12288 Oct 28 09:09 etc",
"guestfish --ro -d GuestName -i",
"guestfish -d RHEL3 -i Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 3 (Taroon Update 9) /dev/vda2 mounted on / /dev/vda1 mounted on /boot ><fs> edit /boot/grub/grub.conf",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USDguestname\" -i <<'EOF' write /etc/motd \"Welcome to Acme Incorporated.\" chmod 0644 /etc/motd EOF",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i --ro <<'EOF' aug-init / 0 aug-get /files/etc/sysconfig/keyboard/LAYOUT EOF",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i <<'EOF' aug-init / 0 aug-set /files/etc/sysconfig/keyboard/LAYOUT '\"gb\"' aug-save EOF",
"guestfish -N fs",
"><fs> copy-out /home /tmp/home"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_disk_access_with_offline_tools-The_guestfish_shell |
2.9. Resource Management | 2.9. Resource Management numad package The numad package provides a daemon for NUMA (Non-Uniform Memory Architecture) systems that monitors NUMA characteristics. As an alternative to manual static CPU pining and memory assignment, numad provides dynamic adjustment to minimize memory latency on an ongoing basis. The package also provides an interface that can be used to query the numad daemon for the best manual placement of an application. The numad package is introduced as a Technology Preview. Package: numad-0.5-4.20120522git | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/resource_management-tp |
Operators | Operators OpenShift Container Platform 4.14 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm alpha generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF",
"bundle.core.rukpak.io/combo-tag-ref created",
"oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'",
"Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable",
"tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.13",
"registry.redhat.io/redhat/redhat-operator-index:v4.14",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.27 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.27",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"kind: Subscription spec: installPlanApproval: Manual 1 config: env: - name: ROLEARN value: \"<role_arn>\" 2",
"oc apply -f sub.yaml",
"oc describe packagemanifests <operator_name> -n <catalog_namespace>",
"oc describe packagemanifests quay-operator -n openshift-marketplace",
"Name: quay-operator Namespace: operator-marketplace Labels: catalog=redhat-operators catalog-namespace=openshift-marketplace hypershift.openshift.io/managed=true operatorframework.io/arch.amd64=supported operatorframework.io/os.linux=supported provider=Red Hat provider-url= Annotations: <none> API Version: packages.operators.coreos.com/v1 Kind: PackageManifest Current CSV: quay-operator.v3.7.11 Entries: Name: quay-operator.v3.7.11 Version: 3.7.11 Name: quay-operator.v3.7.10 Version: 3.7.10 Name: quay-operator.v3.7.9 Version: 3.7.9 Name: quay-operator.v3.7.8 Version: 3.7.8 Name: quay-operator.v3.7.7 Version: 3.7.7 Name: quay-operator.v3.7.6 Version: 3.7.6 Name: quay-operator.v3.7.5 Version: 3.7.5 Name: quay-operator.v3.7.4 Version: 3.7.4 Name: quay-operator.v3.7.3 Version: 3.7.3 Name: quay-operator.v3.7.2 Version: 3.7.2 Name: quay-operator.v3.7.1 Version: 3.7.1 Name: quay-operator.v3.7.0 Version: 3.7.0 Name: stable-3.7 Current CSV: quay-operator.v3.8.5 Entries: Name: quay-operator.v3.8.5 Version: 3.8.5 Name: quay-operator.v3.8.4 Version: 3.8.4 Name: quay-operator.v3.8.3 Version: 3.8.3 Name: quay-operator.v3.8.2 Version: 3.8.2 Name: quay-operator.v3.8.1 Version: 3.8.1 Name: quay-operator.v3.8.0 Version: 3.8.0 Name: stable-3.8 Default Channel: stable-3.8 Package Name: quay-operator",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: stable-3.7 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.7.10 2",
"oc apply -f sub.yaml",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"kind: Subscription spec: installPlanApproval: Manual 1 config: env: - name: ROLEARN value: \"<role_arn>\" 2",
"oc apply -f sub.yaml",
"oc describe packagemanifests <operator_name> -n <catalog_namespace>",
"oc describe packagemanifests quay-operator -n openshift-marketplace",
"Name: quay-operator Namespace: operator-marketplace Labels: catalog=redhat-operators catalog-namespace=openshift-marketplace hypershift.openshift.io/managed=true operatorframework.io/arch.amd64=supported operatorframework.io/os.linux=supported provider=Red Hat provider-url= Annotations: <none> API Version: packages.operators.coreos.com/v1 Kind: PackageManifest Current CSV: quay-operator.v3.7.11 Entries: Name: quay-operator.v3.7.11 Version: 3.7.11 Name: quay-operator.v3.7.10 Version: 3.7.10 Name: quay-operator.v3.7.9 Version: 3.7.9 Name: quay-operator.v3.7.8 Version: 3.7.8 Name: quay-operator.v3.7.7 Version: 3.7.7 Name: quay-operator.v3.7.6 Version: 3.7.6 Name: quay-operator.v3.7.5 Version: 3.7.5 Name: quay-operator.v3.7.4 Version: 3.7.4 Name: quay-operator.v3.7.3 Version: 3.7.3 Name: quay-operator.v3.7.2 Version: 3.7.2 Name: quay-operator.v3.7.1 Version: 3.7.1 Name: quay-operator.v3.7.0 Version: 3.7.0 Name: stable-3.7 Current CSV: quay-operator.v3.8.5 Entries: Name: quay-operator.v3.8.5 Version: 3.8.5 Name: quay-operator.v3.8.4 Version: 3.8.4 Name: quay-operator.v3.8.3 Version: 3.8.3 Name: quay-operator.v3.8.2 Version: 3.8.2 Name: quay-operator.v3.8.1 Version: 3.8.1 Name: quay-operator.v3.8.0 Version: 3.8.0 Name: stable-3.8 Default Channel: stable-3.8 Package Name: quay-operator",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: stable-3.7 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.7.10 2",
"oc apply -f sub.yaml",
"apiVersion: v1 kind: Namespace metadata: name: team1-operator",
"oc create -f team1-operator.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1",
"oc create -f team1-operatorgroup.yaml",
"apiVersion: v1 kind: Namespace metadata: name: global-operators",
"oc create -f global-operators.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators",
"oc create -f global-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc get csvs -n openshift",
"oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF",
"oc get events",
"LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry:v4.14 1",
". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <catalog_dir>",
"echo USD?",
"0",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml",
"--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---",
"opm validate <catalog_dir>",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/<index_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.14 --tag mirror.example.com/abc/abc-redhat-operator-index:4.14.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.14",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.14 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.14 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.14 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.14",
"opm migrate <registry_image> <fbc_directory>",
"opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.14",
"opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.14 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest",
"apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" grpcPodConfig: securityContextConfig: <security_mode> 2 image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.14 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge",
"grpcPodConfig: nodeSelector: custom_label: <label>",
"grpcPodConfig: priorityClassName: <priority_class>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical",
"grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" name: cluster spec: featureSet: TechPreviewNoUpgrade 1",
"apiVersion: platform.openshift.io/v1alpha1 kind: PlatformOperator metadata: name: service-mesh-po spec: package: name: servicemeshoperator",
"oc get platformoperator service-mesh-po -o yaml",
"status: activeBundleDeployment: name: service-mesh-po conditions: - lastTransitionTime: \"2022-10-24T17:24:40Z\" message: Successfully applied the service-mesh-po BundleDeployment resource reason: InstallSuccessful status: \"True\" 1 type: Installed",
"oc get clusteroperator platform-operators-aggregated -o yaml",
"status: conditions: - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"False\" type: Progressing - lastTransitionTime: \"2022-10-24T17:43:26Z\" status: \"False\" type: Degraded - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"True\" type: Available",
"apiVersion: platform.openshift.io/v1alpha1 kind: PlatformOperator metadata: name: service-mesh-po spec: package: name: servicemeshoperator",
"oc apply -f service-mesh-po.yaml",
"error: resource mapping not found for name: \"service-mesh-po\" namespace: \"\" from \"service-mesh-po.yaml\": no matches for kind \"PlatformOperator\" in version \"platform.openshift.io/v1alpha1\" ensure CRDs are installed first",
"oc get platformoperator service-mesh-po -o yaml",
"status: activeBundleDeployment: name: service-mesh-po conditions: - lastTransitionTime: \"2022-10-24T17:24:40Z\" message: Successfully applied the service-mesh-po BundleDeployment resource reason: InstallSuccessful status: \"True\" 1 type: Installed",
"oc get clusteroperator platform-operators-aggregated -o yaml",
"status: conditions: - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"False\" type: Progressing - lastTransitionTime: \"2022-10-24T17:43:26Z\" status: \"False\" type: Degraded - lastTransitionTime: \"2022-10-24T17:43:26Z\" message: All platform operators are in a successful state reason: AsExpected status: \"True\" type: Available",
"oc get platformoperator",
"oc delete platformoperator quay-operator",
"platformoperator.platform.openshift.io \"quay-operator\" deleted",
"oc get ns quay-operator-system",
"Error from server (NotFound): namespaces \"quay-operator-system\" not found",
"oc get co platform-operators-aggregated",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE platform-operators-aggregated 4.14.0-0 True False False 70s",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"FROM quay.io/operator-framework/ansible-operator:v1.31.0",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"collections: - - name: community.kubernetes 1 - version: \"2.0.1\" - name: operator_sdk.util - version: \"0.4.0\" + version: \"0.5.0\" 2 - name: kubernetes.core version: \"2.4.0\" - name: cloud.common",
"--- dependency: name: galaxy driver: name: delegated - lint: | - set -e - yamllint -d \"{extends: relaxed, rules: {line-length: {max: 120}}}\" . platforms: - name: cluster groups: - k8s provisioner: name: ansible - lint: | - set -e ansible-lint inventory: group_vars: all: namespace: USD{TEST_OPERATOR_NAMESPACE:-osdk-test} host_vars: localhost: ansible_python_interpreter: '{{ ansible_playbook_python }}' config_dir: USD{MOLECULE_PROJECT_DIRECTORY}/config samples_dir: USD{MOLECULE_PROJECT_DIRECTORY}/config/samples operator_image: USD{OPERATOR_IMAGE:-\"\"} operator_pull_policy: USD{OPERATOR_PULL_POLICY:-\"Always\"} kustomize: USD{KUSTOMIZE_PATH:-kustomize} env: K8S_AUTH_KUBECONFIG: USD{KUBECONFIG:-\"~/.kube/config\"} verifier: name: ansible - lint: | - set -e - ansible-lint",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"FROM quay.io/operator-framework/helm-operator:v1.31.0 1",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <operator_name>-admin subjects: - kind: ServiceAccount name: <operator_name> namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\" rules: 1 - apiGroups: - \"\" resources: - secrets verbs: - watch",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"mkdir -p USDHOME/github.com/example/memcached-operator",
"cd USDHOME/github.com/example/memcached-operator",
"operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help",
"Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch",
"// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }",
"operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v3",
"Create Resource [y/n] y Create Controller [y/n] y",
"// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )",
"// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch",
"make install run",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc project <project_name>-system",
"apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m",
"apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2",
"oc apply -f config/samples/cache_v1_memcachedbackup.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"oc delete -f config/samples/cache_v1_memcachedbackup.yaml",
"make undeploy",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"",
"operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4",
"tree",
". ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files",
"public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }",
"import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }",
"@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}",
"mvn clean install",
"cat target/kubernetes/memcacheds.cache.example.com-v1.yaml",
"Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1",
"<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>",
"package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", 
\"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }",
"Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();",
"if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }",
"int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();",
"if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }",
"List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());",
"if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }",
"private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }",
"private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }",
"mvn clean install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"oc apply -f rbac.yaml",
"java -jar target/quarkus-app/quarkus-run.jar",
"kubectl apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f rbac.yaml",
"oc get all -n default",
"NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s",
"oc apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply credentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.31.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.31.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.31.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.31.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.31.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.31.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 },",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"oc -n [namespace] edit cm hw-event-proxy-operator-manager-config",
"apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> spec: packageName: <package_name> channel: <channel_name> version: <version_number>",
"oc get operator.operators.operatorframework.io",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: quay-example spec: packageName: quay-operator channel: stable-3.8 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: quay-example spec: packageName: quay-operator version: 3.8.12 1",
"oc get package <catalog_name>-<package_name> -o yaml",
"oc apply -f <extension_name>.yaml",
"oc get operator.operators.operatorframework.io <operator_name> -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"quay-example\"},\"spec\":{\"packageName\":\"quay-operator\",\"version\":\"999.99.9\"}} creationTimestamp: \"2023-10-19T18:39:37Z\" generation: 3 name: quay-example resourceVersion: \"51505\" uid: 2558623b-8689-421c-8ed5-7b14234af166 spec: packageName: quay-operator version: 999.99.9 status: conditions: - lastTransitionTime: \"2023-10-19T18:50:34Z\" message: package 'quay-operator' at version '999.99.9' not found observedGeneration: 3 reason: ResolutionFailed status: \"False\" type: Resolved - lastTransitionTime: \"2023-10-19T18:50:34Z\" message: installation has not been attempted as resolution failed observedGeneration: 3 reason: InstallationStatusUnknown status: Unknown type: Installed",
"apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF",
"bundle.core.rukpak.io/combo-tag-ref created",
"oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'",
"Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable",
"tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.14",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.14",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.14",
"oc apply -f <catalog_name>.yaml 1",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.14",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.14",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.14",
"oc apply -f <catalog_name>.yaml 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: quay-example spec: packageName: quay-operator channel: stable-3.8 1",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: quay-example spec: packageName: quay-operator version: 3.8.12 1",
"oc get package <catalog_name>-<package_name> -o yaml",
"oc apply -f <extension_name>.yaml",
"oc get operator.operators.operatorframework.io <operator_name> -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"quay-example\"},\"spec\":{\"packageName\":\"quay-operator\",\"version\":\"999.99.9\"}} creationTimestamp: \"2023-10-19T18:39:37Z\" generation: 3 name: quay-example resourceVersion: \"51505\" uid: 2558623b-8689-421c-8ed5-7b14234af166 spec: packageName: quay-operator version: 999.99.9 status: conditions: - lastTransitionTime: \"2023-10-19T18:50:34Z\" message: package 'quay-operator' at version '999.99.9' not found observedGeneration: 3 reason: ResolutionFailed status: \"False\" type: Resolved - lastTransitionTime: \"2023-10-19T18:50:34Z\" message: installation has not been attempted as resolution failed observedGeneration: 3 reason: InstallationStatusUnknown status: Unknown type: Installed",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.14 1",
"oc apply -f redhat-operators.yaml",
"catalog.catalogd.operatorframework.io/redhat-operators created",
"oc get catalog",
"NAME AGE redhat-operators 20s",
"oc get catalogs.catalogd.operatorframework.io -o yaml",
"apiVersion: v1 items: - apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Catalog metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"catalogd.operatorframework.io/v1alpha1\",\"kind\":\"Catalog\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operators\"},\"spec\":{\"source\":{\"image\":{\"ref\":\"registry.redhat.io/redhat/redhat-operator-index:v4.14\"},\"type\":\"image\"}}} creationTimestamp: \"2023-10-16T13:30:59Z\" generation: 1 name: redhat-operators resourceVersion: \"37304\" uid: cf00c68c-4312-4e06-aa8a-299f0bbf496b spec: source: image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.14 type: image status: 1 conditions: - lastTransitionTime: \"2023-10-16T13:32:25Z\" message: successfully unpacked the catalog image \"registry.redhat.io/redhat/redhat-operator-index@sha256:bd2f1060253117a627d2f85caa1532ebae1ba63da2a46bdd99e2b2a08035033f\" 2 reason: UnpackSuccessful 3 status: \"True\" type: Unpacked phase: Unpacked 4 resolvedSource: image: ref: registry.redhat.io/redhat/redhat-operator-index@sha256:bd2f1060253117a627d2f85caa1532ebae1ba63da2a46bdd99e2b2a08035033f 5 type: image kind: List metadata: resourceVersion: \"\"",
"oc get packages",
"NAME AGE redhat-operators-3scale-operator 5m27s redhat-operators-advanced-cluster-management 5m27s redhat-operators-amq-broker-rhel8 5m27s redhat-operators-amq-online 5m27s redhat-operators-amq-streams 5m27s redhat-operators-amq7-interconnect-operator 5m27s redhat-operators-ansible-automation-platform-operator 5m27s redhat-operators-ansible-cloud-addons-operator 5m27s redhat-operators-apicast-operator 5m27s redhat-operators-aws-efs-csi-driver-operator 5m27s redhat-operators-aws-load-balancer-operator 5m27s",
"oc get package <catalog_name>-<package_name> -o yaml",
"oc get package redhat-operators-quay-operator -o yaml",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: Package metadata: creationTimestamp: \"2023-10-06T01:14:04Z\" generation: 1 labels: catalog: redhat-operators name: redhat-operators-quay-operator ownerReferences: - apiVersion: catalogd.operatorframework.io/v1alpha1 blockOwnerDeletion: true controller: true kind: Catalog name: redhat-operators uid: 403004b6-54a3-4471-8c90-63419f6a2c3e resourceVersion: \"45196\" uid: 252cfe74-936d-44fc-be5d-09a7be7e36f5 spec: catalog: name: redhat-operators channels: - entries: - name: quay-operator.v3.4.7 skips: - red-hat-quay.v3.3.4 - quay-operator.v3.4.6 - quay-operator.v3.4.5 - quay-operator.v3.4.4 - quay-operator.v3.4.3 - quay-operator.v3.4.2 - quay-operator.v3.4.1 - quay-operator.v3.4.0 name: quay-v3.4 - entries: - name: quay-operator.v3.5.7 replaces: quay-operator.v3.5.6 skipRange: '>=3.4.x <3.5.7' name: quay-v3.5 - entries: - name: quay-operator.v3.6.0 skipRange: '>=3.3.x <3.6.0' - name: quay-operator.v3.6.1 replaces: quay-operator.v3.6.0 skipRange: '>=3.3.x <3.6.1' - name: quay-operator.v3.6.10 replaces: quay-operator.v3.6.9 skipRange: '>=3.3.x <3.6.10' - name: quay-operator.v3.6.2 replaces: quay-operator.v3.6.1 skipRange: '>=3.3.x <3.6.2' - name: quay-operator.v3.6.4 replaces: quay-operator.v3.6.2 skipRange: '>=3.3.x <3.6.4' - name: quay-operator.v3.6.5 replaces: quay-operator.v3.6.4 skipRange: '>=3.3.x <3.6.5' - name: quay-operator.v3.6.6 replaces: quay-operator.v3.6.5 skipRange: '>=3.3.x <3.6.6' - name: quay-operator.v3.6.7 replaces: quay-operator.v3.6.6 skipRange: '>=3.3.x <3.6.7' - name: quay-operator.v3.6.8 replaces: quay-operator.v3.6.7 skipRange: '>=3.3.x <3.6.8' - name: quay-operator.v3.6.9 replaces: quay-operator.v3.6.8 skipRange: '>=3.3.x <3.6.9' name: stable-3.6 - entries: - name: quay-operator.v3.7.10 replaces: quay-operator.v3.7.9 skipRange: '>=3.4.x <3.7.10' - name: quay-operator.v3.7.11 replaces: quay-operator.v3.7.10 skipRange: '>=3.4.x <3.7.11' - name: quay-operator.v3.7.12 replaces: quay-operator.v3.7.11 skipRange: '>=3.4.x <3.7.12' - name: quay-operator.v3.7.13 replaces: quay-operator.v3.7.12 skipRange: '>=3.4.x <3.7.13' - name: quay-operator.v3.7.14 replaces: quay-operator.v3.7.13 skipRange: '>=3.4.x <3.7.14' name: stable-3.7 - entries: - name: quay-operator.v3.8.0 skipRange: '>=3.5.x <3.8.0' - name: quay-operator.v3.8.1 replaces: quay-operator.v3.8.0 skipRange: '>=3.5.x <3.8.1' - name: quay-operator.v3.8.10 replaces: quay-operator.v3.8.9 skipRange: '>=3.5.x <3.8.10' - name: quay-operator.v3.8.11 replaces: quay-operator.v3.8.10 skipRange: '>=3.5.x <3.8.11' - name: quay-operator.v3.8.12 replaces: quay-operator.v3.8.11 skipRange: '>=3.5.x <3.8.12' - name: quay-operator.v3.8.2 replaces: quay-operator.v3.8.1 skipRange: '>=3.5.x <3.8.2' - name: quay-operator.v3.8.3 replaces: quay-operator.v3.8.2 skipRange: '>=3.5.x <3.8.3' - name: quay-operator.v3.8.4 replaces: quay-operator.v3.8.3 skipRange: '>=3.5.x <3.8.4' - name: quay-operator.v3.8.5 replaces: quay-operator.v3.8.4 skipRange: '>=3.5.x <3.8.5' - name: quay-operator.v3.8.6 replaces: quay-operator.v3.8.5 skipRange: '>=3.5.x <3.8.6' - name: quay-operator.v3.8.7 replaces: quay-operator.v3.8.6 skipRange: '>=3.5.x <3.8.7' - name: quay-operator.v3.8.8 replaces: quay-operator.v3.8.7 skipRange: '>=3.5.x <3.8.8' - name: quay-operator.v3.8.9 replaces: quay-operator.v3.8.8 skipRange: '>=3.5.x <3.8.9' name: stable-3.8 - entries: - name: quay-operator.v3.9.0 skipRange: '>=3.6.x <3.9.0' - name: quay-operator.v3.9.1 replaces: quay-operator.v3.9.0 skipRange: '>=3.6.x <3.9.1' - name: 
quay-operator.v3.9.2 replaces: quay-operator.v3.9.1 skipRange: '>=3.6.x <3.9.2' name: stable-3.9 defaultChannel: stable-3.9 description: \"\" icon: data: PD94bWwgdmVyc2lvbj mediatype: image/svg+xml packageName: quay-operator status: {}",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: quay-example spec: packageName: quay-operator version: 3.8.12",
"oc apply -f test-operator.yaml",
"operator.operators.operatorframework.io/quay-example created",
"oc get operator.operators.operatorframework.io/quay-example -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"quay-example\"},\"spec\":{\"packageName\":\"quay-operator\",\"version\":\"3.8.12\"}} creationTimestamp: \"2023-10-19T18:39:37Z\" generation: 1 name: quay-example resourceVersion: \"45663\" uid: 2558623b-8689-421c-8ed5-7b14234af166 spec: packageName: quay-operator version: 3.8.12 status: conditions: - lastTransitionTime: \"2023-10-19T18:39:37Z\" message: resolved to \"registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2023-10-19T18:39:46Z\" message: installed from \"registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7\" observedGeneration: 1 reason: Success status: \"True\" type: Installed installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7 resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:bf26c7679ea1f7b47d2b362642a9234cddb9e366a89708a4ffcbaf4475788dc7",
"oc get pod -n quay-operator-system",
"NAME READY STATUS RESTARTS AGE quay-operator.v3.8.12-6677b5c98f-2kdtb 1/1 Running 0 2m28s",
"oc get package <catalog_name>-<package_name> -o yaml",
"oc get package redhat-operators-quay-operator -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: name: quay-example spec: packageName: quay-operator version: 3.9.1 1",
"oc apply -f test-operator.yaml",
"operator.operators.operatorframework.io/quay-example configured",
"oc patch operator.operators.operatorframework.io/quay-example -p '{\"spec\":{\"version\":\"3.9.1\"}}' --type=merge",
"operator.operators.operatorframework.io/quay-example patched",
"oc get operator.operators.operatorframework.io/quay-example -o yaml",
"apiVersion: operators.operatorframework.io/v1alpha1 kind: Operator metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operators.operatorframework.io/v1alpha1\",\"kind\":\"Operator\",\"metadata\":{\"annotations\":{},\"name\":\"quay-example\"},\"spec\":{\"packageName\":\"quay-operator\",\"version\":\"3.9.1\"}} creationTimestamp: \"2023-10-19T18:39:37Z\" generation: 2 name: quay-example resourceVersion: \"47423\" uid: 2558623b-8689-421c-8ed5-7b14234af166 spec: packageName: quay-operator version: 3.9.1 1 status: conditions: - lastTransitionTime: \"2023-10-19T18:39:37Z\" message: resolved to \"registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09\" observedGeneration: 2 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2023-10-19T18:39:46Z\" message: installed from \"registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09\" observedGeneration: 2 reason: Success status: \"True\" type: Installed installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09 resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09",
"oc delete operator.operators.operatorframework.io quay-example",
"operator.operators.operatorframework.io \"quay-example\" deleted",
"oc get operator.operators.operatorframework.io",
"No resources found",
"oc get ns quay-operator-system",
"Error from server (NotFound): namespaces \"quay-operator-system\" not found",
"oc delete catalog <catalog_name>",
"catalog.catalogd.operatorframework.io \"my-catalog\" deleted",
"oc get catalog",
"manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml",
"FROM scratch 1 ADD manifests /manifests",
"podman build -f plainbundle.Dockerfile -t quay.io/<organization_name>/<repository_name>:<image_tag> . 1",
"podman push quay.io/<organization_name>/<repository_name>:<image_tag>",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry:v4.14 1",
". ├── <catalog_dir> └── <catalog_dir>.Dockerfile",
"opm init <extension_name> --output json > <catalog_dir>/index.json",
"{ { \"schema\": \"olm.package\", \"name\": \"<extension_name>\", \"defaultChannel\": \"\" } }",
"{ \"schema\": \"olm.bundle\", \"name\": \"<extension_name>.v<version>\", \"package\": \"<extension_name>\", \"image\": \"quay.io/<organization_name>/<repository_name>:<image_tag>\", \"properties\": [ { \"type\": \"olm.package\", \"value\": { \"packageName\": \"<extension_name>\", \"version\": \"<bundle_version>\" } }, { \"type\": \"olm.bundle.mediatype\", \"value\": \"plain+v0\" } ] }",
"{ \"schema\": \"olm.channel\", \"name\": \"<desired_channel_name>\", \"package\": \"<extension_name>\", \"entries\": [ { \"name\": \"<extension_name>.v<version>\" } ] }",
"{ \"schema\": \"olm.package\", \"name\": \"example-extension\", \"defaultChannel\": \"preview\" } { \"schema\": \"olm.bundle\", \"name\": \"example-extension.v0.0.1\", \"package\": \"example-extension\", \"image\": \"quay.io/example-org/example-extension-bundle:v0.0.1\", \"properties\": [ { \"type\": \"olm.package\", \"value\": { \"packageName\": \"example-extension\", \"version\": \"0.0.1\" } }, { \"type\": \"olm.bundle.mediatype\", \"value\": \"plain+v0\" } ] } { \"schema\": \"olm.channel\", \"name\": \"preview\", \"package\": \"example-extension\", \"entries\": [ { \"name\": \"example-extension.v0.0.1\" } ] }",
"opm validate <catalog_dir>",
"podman build -f <catalog_dir>.Dockerfile -t quay.io/<organization_name>/<repository_name>:<image_tag> .",
"podman push quay.io/<organization_name>/<repository_name>:<image_tag>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/operators/index |
1.2.4. Adding, Renaming, and Deleting Files | 1.2.4. Adding, Renaming, and Deleting Files Adding a File or Directory To add an existing file to a Subversion repository and put it under revision control, change to the directory with its working copy and run the following command: svn add file ... Similarly, to add a directory and all files that are in it, type: svn add directory ... This schedules the files and directories for addition to the Subversion repository. To proceed and actually add this content to the repository, run the svn commit command as described in Section 1.2.6, "Committing Changes" . Example 1.7. Adding a file to a Subversion repository Imagine that the directory with your working copy of a Subversion repository has the following contents: With the exception of ChangeLog , all files and directories within this directory are already under revision control. To schedule this file for addition to the Subversion repository, type: Renaming a File or Directory To rename an existing file or directory in a Subversion repository, change to the directory with its working copy and run the following command: svn move old_name new_name This creates a duplicate of the original file or directory, schedules it for addition, and automatically deletes the original. To proceed and actually rename the content in the Subversion repository, run the svn commit command as described in Section 1.2.6, "Committing Changes" . Example 1.8. Renaming a file in a Subversion repository Imagine that the directory with your working copy of a Subversion repository has the following contents: All files in this directory are under revision control. To schedule the LICENSE file for renaming to COPYING , type: Note that svn move automatically renames the file in your working copy: Deleting a File or Directory To remove a file from a Subversion repository, change to the directory with its working copy and run the following command: svn delete file ... Similarly, to remove a directory and all files that are in it, type: svn delete directory ... This schedules the files and directories for removal from the Subversion repository. To proceed and actually remove this content from the repository, run the svn commit command as described in Section 1.2.6, "Committing Changes" . Example 1.9. Deleting a file from a Subversion repository Imagine that the directory with your working copy of a Subversion repository has the following contents: All files in this directory are under revision control. To schedule the TODO file for removal from the SVN repository, type: Note that svn delete automatically deletes the file from your working copy: | [
"project]USD ls AUTHORS ChangeLog doc INSTALL LICENSE Makefile README src TODO",
"project]USD svn add ChangeLog A ChangeLog",
"project]USD ls AUTHORS ChangeLog doc INSTALL LICENSE Makefile README src TODO",
"project]USD svn move LICENSE COPYING A COPYING D LICENSE",
"project]USD ls AUTHORS ChangeLog COPYING doc INSTALL Makefile README src TODO",
"project]USD ls AUTHORS ChangeLog COPYING doc INSTALL Makefile README src TODO",
"project]USD svn delete TODO D TODO",
"project]USD ls AUTHORS ChangeLog COPYING doc INSTALL Makefile README src"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-revision_control_systems-svn-file |
Chapter 7. Subscriptions | Chapter 7. Subscriptions 7.1. Creating subscriptions After you have created a channel and an event sink, you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the sink (also known as a subscriber ) to deliver events to. 7.1.1. Creating a subscription by using the Administrator perspective After you have created a channel and an event sink, also known as a subscriber , you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the subscriber to deliver events to. You can also specify some subscriber-specific options, such as how to handle failures. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a Knative channel. You have created a Knative service to use as a subscriber. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Channel tab, select the Options menu for the channel that you want to add a subscription to. Click Add Subscription in the list. In the Add Subscription dialogue box, select a Subscriber for the subscription. The subscriber is the Knative service that receives events from the channel. Click Add . 7.1.2. Creating a subscription by using the Developer perspective After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a subscription. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created an event sink, such as a Knative service, and a channel. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to the Topology page. Create a subscription using one of the following methods: Hover over the channel that you want to create a subscription for, and drag the arrow. The Add Subscription option is displayed. Select your sink in the Subscriber list. Click Add . If the service is available in the Topology view under the same namespace or project as the channel, click on the channel that you want to create a subscription for, and drag the arrow directly to a service to immediately create a subscription from the channel to that service. Verification After the subscription has been created, you can see it represented as a line that connects the channel to the service in the Topology view: 7.1.3. Creating a subscription by using YAML After you have created a channel and an event sink, you can create a subscription to enable event delivery. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe subscriptions declaratively and in a reproducible manner. 
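Because the API is declarative, you can keep Subscription manifests in version control and preview what an apply would change before running it. The following is a minimal sketch, assuming the manifest from the procedure below is saved as subscription.yaml (the file name is illustrative, not required):

oc diff -f subscription.yaml    # preview the difference against the live cluster state
oc apply -f subscription.yaml   # create or update the Subscription idempotently

Both commands are standard oc behavior inherited from kubectl; rerunning oc apply with an unchanged file is a no-op, which is what makes the workflow reproducible.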
To create a subscription by using YAML, you must create a YAML file that defines a Subscription object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Subscription object: Create a YAML file and copy the following sample code into it: apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 Name of the subscription. 2 Configuration settings for the channel that the subscription connects to. 3 Configuration settings for event delivery. This tells the subscription what happens to events that cannot be delivered to the subscriber. When this is configured, events that failed to be consumed are sent to the deadLetterSink . The event is dropped, no re-delivery of the event is attempted, and an error is logged in the system. The deadLetterSink value must be a Destination . 4 Configuration settings for the subscriber. This is the event sink that events are delivered to from the channel. Apply the YAML file: USD oc apply -f <filename> 7.1.4. Creating a subscription by using the Knative CLI After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the Knative ( kn ) CLI to create subscriptions provides a more streamlined and intuitive user interface than modifying YAML files directly. You can use the kn subscription create command with the appropriate flags to create a subscription. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a subscription to connect a sink to a channel: USD kn subscription create <subscription_name> \ --channel <group:version:kind>:<channel_name> \ 1 --sink <sink_prefix>:<sink_name> \ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3 1 --channel specifies the source for cloud events that should be processed. You must provide the channel name. If you are not using the default InMemoryChannel channel that is backed by the Channel custom resource, you must prefix the channel name with the <group:version:kind> for the specified channel type. For example, this will be messaging.knative.dev:v1beta1:KafkaChannel for an Apache Kafka backed channel. 2 --sink specifies the target destination to which the event should be delivered. By default, the <sink_name> is interpreted as a Knative service of this name, in the same namespace as the subscription. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker. 
3 Optional: --sink-dead-letter is an optional flag that can be used to specify a sink to which events should be sent in cases where events fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription create mysubscription --channel mychannel --sink ksvc:event-display Example output Subscription 'mysubscription' created in namespace 'default'. Verification To confirm that the channel is connected to the event sink, or subscriber , by a subscription, list the existing subscriptions and inspect the output: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True Deleting a subscription Delete a subscription: USD kn subscription delete <subscription_name> 7.1.5. Next steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 7.2. Managing subscriptions 7.2.1. Describing subscriptions by using the Knative CLI You can use the kn subscription describe command to print information about a subscription in the terminal by using the Knative ( kn ) CLI. Using the Knative CLI to describe subscriptions provides a more streamlined and intuitive user interface than viewing YAML files directly. Prerequisites You have installed the Knative ( kn ) CLI. You have created a subscription in your cluster. Procedure Describe a subscription: USD kn subscription describe <subscription_name> Example output Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min ... Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s 7.2.2. Listing subscriptions by using the Knative CLI You can use the kn subscription list command to list existing subscriptions on your cluster by using the Knative ( kn ) CLI. Using the Knative CLI to list subscriptions provides a streamlined and intuitive user interface. Prerequisites You have installed the Knative ( kn ) CLI. Procedure List subscriptions on your cluster: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True 7.2.3. Updating subscriptions by using the Knative CLI You can use the kn subscription update command with the appropriate flags to update a subscription from the terminal by using the Knative ( kn ) CLI. Using the Knative CLI to update subscriptions provides a more streamlined and intuitive user interface than updating YAML files directly. Prerequisites You have installed the Knative ( kn ) CLI. You have created a subscription. Procedure Update a subscription: USD kn subscription update <subscription_name> \ --sink <sink_prefix>:<sink_name> \ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2 1 --sink specifies the updated target destination to which the event should be delivered. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker.
2 Optional: --sink-dead-letter is an optional flag that can be used to specify a sink to which events should be sent in cases where events fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription update mysubscription --sink ksvc:event-display | [
"apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"kn subscription create <subscription_name> --channel <group:version:kind>:<channel_name> \\ 1 --sink <sink_prefix>:<sink_name> \\ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3",
"kn subscription create mysubscription --channel mychannel --sink ksvc:event-display",
"Subscription 'mysubscription' created in namespace 'default'.",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription delete <subscription_name>",
"kn subscription describe <subscription_name>",
"Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription update <subscription_name> --sink <sink_prefix>:<sink_name> \\ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2",
"kn subscription update mysubscription --sink ksvc:event-display"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/eventing/subscriptions |
Chapter 9. Image configuration resources | Chapter 9. Image configuration resources Use the following procedure to configure image registries. 9.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a non-standard 80 or 443 port, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. 
containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default OpenShift image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 9.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registry are applied to the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) performs the following sequential actions: Cordons the node Applies changes by restarting CRI-O Uncordons the node Note The MCO does not restart nodes when it detects changes. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 
3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks. To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.26.0 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.26.0 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.26.0 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.26.0 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.26.0 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.26.0 9.2.1. Adding specific registries You can add a list of registries, and optionally an individual repository within a registry, that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked. Warning When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. 
If you use the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an allowed list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, to use for image pull and push actions. All other registries are blocked. Note Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it triggers a rollout on nodes in machine config pool (MCP). The allowed registries list is used to update the image signature policy in the /etc/containers/policy.json file on each node. Changes to the /etc/containers/policy.json file do not require the node to drain. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/policy.json | jq '.' The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes: Example 9.1. 
Example image signature policy file { "default":[ { "type":"reject" } ], "transports":{ "atomic":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker-daemon":{ "":[ { "type":"insecureAcceptAnything" } ] } } } Note If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are included in the allowed list. For example: spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000 9.2.2. Blocking specific registries You can block any registry, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed. Warning To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with a blocked list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, that should not be used for image pull and push actions. All other registries are allowed. Note Either the blockedRegistries registry or the allowedRegistries registry can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. 
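If you want to observe the drain-and-apply rollout that the MCO performs, one option is to watch the machine config pools until the affected pool reports that it is updated. This is a sketch of an optional check, not a required step; the pool name worker is an assumption and depends on which pool your nodes belong to:

oc get machineconfigpool    # summary of rollout status for all pools
oc get mcp worker -w        # watch the worker pool until UPDATED reports True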
After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the configuration file: sh-5.1# cat /etc/containers/registries.conf The following example indicates that images from the untrusted.com registry are blocked for image pulls and pushes: Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "untrusted.com" blocked = true 9.2.2.1. Blocking a payload registry In a mirroring configuration, you can block upstream payload registries in a disconnected environment using an ImageContentSourcePolicy (ICSP) object. The following example procedure demonstrates how to block the quay.io/openshift-payload payload registry. Procedure Create the mirror configuration using an ImageContentSourcePolicy (ICSP) object to mirror the payload to a registry in your instance. The following example ICSP file mirrors the payload to internal-mirror.io/openshift-payload : apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload After the object deploys onto your nodes, verify that the mirror configuration is set by checking the /etc/containers/registries.conf file: Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" Use the following command to edit the image.config.openshift.io custom resource file: USD oc edit image.config.openshift.io cluster To block the payload registry, add the following configuration to the image.config.openshift.io custom resource file: spec: registrySources: blockedRegistries: - quay.io/openshift-payload Verification Verify that the upstream payload registry is blocked by checking the /etc/containers/registries.conf file on the node. Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" 9.2.3. Allowing insecure registries You can add insecure registries, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure. Warning Insecure external registries should be avoided to reduce possible security risks.
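Before you list a registry as insecure, it can be worth confirming that it actually fails TLS verification. A hedged sketch using skopeo, where the registry and image names are placeholders:

skopeo inspect docker://insecure.com/myrepo/myapp:latest                      # fails if the certificate is invalid
skopeo inspect --tls-verify=false docker://insecure.com/myrepo/myapp:latest   # succeeds if the registry is otherwise reachable

If only the second command succeeds, the registry belongs in insecureRegistries (and, if you use an allowed list, in allowedRegistries as shown in the procedure that follows).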
Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an insecure registries list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify an insecure registry. You can specify a repository in that registry. 3 Ensure that any insecure registries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node. Verification To check that the registries have been added to the configuration file, use the following command on a node: USD cat /etc/containers/registries.conf The following example indicates that the insecure.com registry is marked insecure and is allowed for image pulls and pushes. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "insecure.com" insecure = true 9.2.4. Adding registries that allow image short names You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhel7/etcd . You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR.
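To illustrate how short-name resolution behaves, suppose reg1.io and reg2.io are configured as the search registries (placeholder names). A pull of an unqualified name then tries each registry in order; the following is a sketch of the behavior, not literal output:

podman pull myrepo/myapp:latest
# the container runtime expands the short name against the configured search list:
#   1. reg1.io/myrepo/myapp:latest
#   2. reg2.io/myrepo/myapp:latest
# the first registry that serves the image wins; if none does, the pull fails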
If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries. Warning Using image short names with public registries is strongly discouraged because the image might not deploy if the public registry requires authentication. Use fully-qualified image names with public registries. Red Hat internal or private registries typically support the use of image short names. If you list public registries under the containerRuntimeSearchRegistries parameter (including the registry.redhat.io , docker.io , and quay.io registries), you expose your credentials to all the registries on the list, and you risk network and registry attacks. Because you can only have one pull secret for pulling images, as defined by the global pull secret, that secret is used to authenticate against every registry in that list. Therefore, if you include public registries in the list, you introduce a security risk. You cannot list multiple public registries under the containerRuntimeSearchRegistries parameter if each public registry requires different credentials and a cluster does not list the public registry in the global pull secret. For a public registry that requires authentication, you can use an image short name only if the registry has its credentials stored in the global pull secret. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /etc/containers/registries.conf file. There is no way to fall back to the default list of unqualified search registries. The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines. The registries in the list can be used only in pod specs, not in builds and image streams. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 ... status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Specify registries to use with image short names. You should use image short names with only internal or private registries to reduce possible security risks. 2 Ensure that any registries listed under containerRuntimeSearchRegistries are included in the allowedRegistries list. 
Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf Example output unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io'] 9.2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 9.3. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. 
The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fallback policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fallback policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. The ICSP CR always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identifies the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository that you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project.
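As a point of comparison with the digest-based examples in the next section, a tag-based mirror is declared with an ImageTagMirrorSet. The following is a minimal sketch with placeholder mirror names; the field layout mirrors the ImageDigestMirrorSet example that follows, with imageTagMirrors in place of imageDigestMirrors:

apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: example-itms
spec:
  imageTagMirrors:
  - mirrors:
    - mirror.example.com/redhat               # tried first for tag-based pulls
    source: registry.redhat.io/openshift4
    mirrorSourcePolicy: AllowContactingSource # fall back to the source if the mirror fails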
Additional resources For more information about global pull secrets, see Updating the global cluster pull secret . 9.3.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use another mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in image pull specifications. 
7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:.. . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256 from the mirror registry example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only respectively. 
Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.redhat.io" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is used only if no other mirror works. If no mirror works, insecure registry flags from the system context are used as a fallback. The format of the /etc/containers/registries.conf file changed recently: it is now version 2 and in TOML format. 9.3.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet, and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors. The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot.
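For example, given a minimal ICSP file like the following, the migration produces the equivalent IDMS file shown after it. This is an illustrative sketch rather than output captured from a cluster; the object name and registry values are placeholders reused from the examples above, and the generated file name carries a numeric suffix as shown in the example output later in this procedure.

Input (icsp.yaml):

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ubi9repo
spec:
  repositoryDigestMirrors:
  - mirrors:
    - example.io/example/ubi-minimal
    source: registry.access.redhat.com/ubi9/ubi-minimal

Output (imagedigestmirrorset_ubi9repo.<suffix>.yaml):

apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: ubi9repo
spec:
  imageDigestMirrors:
  - mirrors:
    - example.io/example/ubi-minimal
    source: registry.access.redhat.com/ubi9/ubi-minimal

Only the apiVersion, the kind, and the repositoryDigestMirrors key change; the mirrors and source values are carried over unmodified.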
For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" earlier in this section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: $ oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml files and saves the new YAML files to the idms-files directory. $ oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: $ oc create -f <path_to_the_directory>/<file_name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out. | [
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.26.0 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.26.0 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.26.0 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.26.0 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.26.0 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.26.0",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/policy.json | jq '.'",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io cluster",
"spec: registrySources: blockedRegistries: - quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/images/image-configuration |
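As a quick check that a mirror configuration object from the procedure above was admitted, you can list and inspect it directly. This is a minimal sketch; the ubi9repo name comes from the earlier ImageDigestMirrorSet example:

$ oc get imagedigestmirrorset
$ oc describe imagedigestmirrorset ubi9repo

The same pattern applies to oc get imagetagmirrorset and, for legacy objects, oc get imagecontentsourcepolicy.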
Chapter 6. PersistentVolumeClaim [v1] | Chapter 6. PersistentVolumeClaim [v1] Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. 6.1.1. .spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. 
storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.2. .spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.3. .spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 6.1.4. 
.spec.resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.5. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 6.1.6. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 6.1.7. .status Description PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. 
For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources object (Quantity) allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound 6.1.8. .status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 6.1.9. 
.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string 6.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumeclaims GET : list or watch objects of kind PersistentVolumeClaim /api/v1/watch/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims DELETE : delete collection of PersistentVolumeClaim GET : list or watch objects of kind PersistentVolumeClaim POST : create a PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} DELETE : delete a PersistentVolumeClaim GET : read the specified PersistentVolumeClaim PATCH : partially update the specified PersistentVolumeClaim PUT : replace the specified PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} GET : watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status GET : read status of the specified PersistentVolumeClaim PATCH : partially update status of the specified PersistentVolumeClaim PUT : replace status of the specified PersistentVolumeClaim 6.2.1. /api/v1/persistentvolumeclaims HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.1. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty 6.2.2. /api/v1/watch/persistentvolumeclaims HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /api/v1/namespaces/{namespace}/persistentvolumeclaims HTTP method DELETE Description delete collection of PersistentVolumeClaim Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.5. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolumeClaim Table 6.6.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.8. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.4. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method DELETE Description delete a PersistentVolumeClaim Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.12. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolumeClaim Table 6.13. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolumeClaim Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters.
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolumeClaim Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.18. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.6. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.19. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method GET Description watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.7. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status Table 6.21. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method GET Description read status of the specified PersistentVolumeClaim Table 6.22. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolumeClaim Table 6.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.24. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolumeClaim Table 6.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.26. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.27. HTTP responses HTTP code Response body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/persistentvolumeclaim-v1 |
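A minimal manifest that exercises the spec fields described above might look like the following sketch. The claim name, namespace, storage class, and size are illustrative assumptions, not values taken from the API reference:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
  namespace: example-namespace
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi

Creating the claim with oc create -f claim.yaml corresponds to the POST endpoint /api/v1/namespaces/{namespace}/persistentvolumeclaims listed above, and oc get pvc example-claim -o yaml reads it back through the corresponding GET endpoint.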
Chapter 6. Removed functionalities | Chapter 6. Removed functionalities None. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/release_notes_and_known_issues/removed-functionalities |
Configuring the Compute Service for Instance Creation | Configuring the Compute Service for Instance Creation Red Hat OpenStack Platform 17.0 A guide to configuring and managing the Red Hat OpenStack Platform Compute (nova) service for creating instances OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/index |
Administration Guide | Administration Guide Red Hat CodeReady Workspaces 2.15 Administering Red Hat CodeReady Workspaces 2.15 Robert Kratky [email protected] Fabrice Flore-Thebault [email protected] Jana Vrbkova [email protected] Max Leonov [email protected] Red Hat Developer Group Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/administration_guide/index |
Chapter 9. Message delivery | Chapter 9. Message delivery 9.1. Sending messages To send a message, override the on_sendable event handler and call the sender::send() method. The sendable event fires when the proton::sender has enough credit to send at least one message. Example: Sending messages struct example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect("amqp://example.com"); conn.open_sender("jobs"); } void on_sendable(proton::sender& snd) override { proton::message msg {"job-1"}; snd.send(msg); } }; 9.2. Tracking sent messages When a message is sent, the sender can keep a reference to the tracker object representing the transfer. The receiver accepts or rejects each message that is delivered. The sender is notified of the outcome for each tracked delivery. To monitor the outcome of a sent message, override the on_tracker_accept and on_tracker_reject event handlers and map the delivery state update to the tracker returned from send(). Example: Tracking sent messages void on_sendable(proton::sender& snd) override { proton::message msg {"job-1"}; proton::tracker trk = snd.send(msg); } void on_tracker_accept(proton::tracker& trk) override { std::cout << "Delivery for " << trk << " is accepted\n"; } void on_tracker_reject(proton::tracker& trk) override { std::cout << "Delivery for " << trk << " is rejected\n"; } 9.3. Receiving messages To receive messages, create a receiver and override the on_message event handler. Example: Receiving messages struct example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect("amqp://example.com"); conn.open_receiver("jobs"); } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << "Received message '" << msg.body() << "'\n"; } }; 9.4. Acknowledging received messages To explicitly accept or reject a delivery, use the delivery::accept() or delivery::reject() methods in the on_message event handler. Example: Acknowledging received messages void on_message(proton::delivery& dlv, proton::message& msg) override { try { process_message(msg); dlv.accept(); } catch (std::exception& e) { dlv.reject(); } } By default, if you do not explicitly acknowledge a delivery, then the library accepts it after on_message returns. To disable this behavior, set the auto_accept receiver option to false. | [
"struct example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect(\"amqp://example.com\"); conn.open_sender(\"jobs\"); } void on_sendable(proton::sender& snd) override { proton::message msg {\"job-1\"}; snd.send(msg); } };",
"void on_sendable(proton::sender& snd) override { proton::message msg {\"job-1\"}; proton::tracker trk = snd.send(msg); } void on_tracker_accept(proton::tracker& trk) override { std::cout << \"Delivery for \" << trk << \" is accepted\\n\"; } void on_tracker_reject(proton::tracker& trk) override { std::cout << \"Delivery for \" << trk << \" is rejected\\n\"; }",
"struct example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect(\"amqp://example.com\"); conn.open_receiver(\"jobs\") ; } void on_message(proton::delivery& dlv, proton::message& msg) override { std::cout << \"Received message '\" << msg.body() << \"'\\n\"; } };",
"void on_message(proton::delivery& dlv, proton::message& msg) override { try { process_message(msg); dlv.accept(); } catch (std::exception& e) { dlv.reject(); } }"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/message_delivery |
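The note above about disabling automatic acceptance maps onto proton::receiver_options when the receiver is opened. A minimal sketch, assuming the same "jobs" address as the examples in this chapter:

struct example_handler : public proton::messaging_handler {
    void on_container_start(proton::container& cont) override {
        proton::connection conn = cont.connect("amqp://example.com");
        // Disable implicit acceptance so each delivery must be settled
        // explicitly in on_message with dlv.accept() or dlv.reject().
        conn.open_receiver("jobs", proton::receiver_options().auto_accept(false));
    }
};

With auto_accept disabled, a delivery that is never accepted or rejected remains unsettled, so pair this option with explicit acknowledgment as shown in the acknowledgment example above.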
Chapter 1. OpenShift Container Platform CLI tools overview | Chapter 1. OpenShift Container Platform CLI tools overview A user performs a range of operations while working on OpenShift Container Platform such as the following: Managing clusters Building, deploying, and managing applications Managing deployment processes Developing Operators Creating and maintaining Operator catalogs OpenShift Container Platform offers a set of command-line interface (CLI) tools that simplify these tasks by enabling users to perform various administration and development operations from the terminal. These tools expose simple commands to manage the applications, as well as interact with each component of the system. 1.1. List of CLI tools The following set of CLI tools are available in OpenShift Container Platform: OpenShift CLI ( oc ) : This is the most commonly used CLI tool by OpenShift Container Platform users. It helps both cluster administrators and developers to perform end-to-end operations across OpenShift Container Platform using the terminal. Unlike the web console, it allows the user to work directly with the project source code using command scripts. Knative CLI (kn) : The Knative ( kn ) CLI tool provides simple and intuitive terminal commands that can be used to interact with OpenShift Serverless components, such as Knative Serving and Eventing. Pipelines CLI (tkn) : OpenShift Pipelines is a continuous integration and continuous delivery (CI/CD) solution in OpenShift Container Platform, which internally uses Tekton. The tkn CLI tool provides simple and intuitive commands to interact with OpenShift Pipelines using the terminal. opm CLI : The opm CLI tool helps the Operator developers and cluster administrators to create and maintain the catalogs of Operators from the terminal. Operator SDK : The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/cli-tools-overview |
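For orientation, day-to-day use of the oc CLI follows a log-in-then-act pattern. A brief sketch with placeholder server, user, and project names:

$ oc login https://api.example.com:6443 --username=developer
$ oc new-project my-app
$ oc get pods

The other CLI tools listed above (kn, tkn, opm, operator-sdk) follow the same terminal-first model within their own domains.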
21.4. Roles Permits Sub-Collection | 21.4. Roles Permits Sub-Collection 21.4.1. Roles Permits Sub-Collection Each role contains a set of allowable actions, or permits , which the API lists in capabilities . A role's permits are listed as a sub-collection: Example 21.5. Listing a role's permits 21.4.2. Assign a Permit to a Role Assign a permit to a role with a POST request to the permits sub-collection. Use either an id attribute or a name element to specify the permit to assign. Example 21.6. Assign a permit to a role 21.4.3. Remove a Permit from a Role Remove a permit from a role with a DELETE request to the permit resource. Example 21.7. Remove a permit from a role | [
"GET /ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9/permits HTTP/1.1 Accept: application/xml HTTP/1.1 200 OK Content-Type: application/xml <permits> <permit id=\"1\" href=\"/ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9/permits/1\"> <name>create_vm</name> <administrative>false</administrative> <role id=\"b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9\" href=\"/ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9\"/> </permit> </permits>",
"POST /ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9/permits HTTP/1.1 Accept: application/xml Content-Type: application/xml <permit id=\"1\"/> HTTP/1.1 201 Created Content-Type: application/xml <permits> <permit id=\"1\" href=\"/ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9/permits/1\"> <name>create_vm</name> <administrative>false</administrative> <role id=\"b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9\" href=\"/ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9\"/> </permit> </permits>",
"DELETE /ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9/permits/1 HTTP/1.1 HTTP/1.1 204 No Content"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-roles_permits_sub-collection |
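The raw HTTP exchanges above translate directly to curl. A hedged sketch of Example 21.6; the host name and credentials are placeholders, the role ID is taken from the examples above, and you may need --cacert or --insecure depending on the Manager's TLS configuration:

$ curl -X POST \
  -u "admin@internal:password" \
  -H "Content-Type: application/xml" \
  -H "Accept: application/xml" \
  -d '<permit id="1"/>' \
  "https://rhvm.example.com/ovirt-engine/api/roles/b67dfbe2-0dbc-41e4-86d3-a2fbef02cfa9/permits"

A 201 Created response with the permit representation in the body indicates that the assignment succeeded, matching the example response above.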
Monitoring | Monitoring OpenShift Container Platform 4.9 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc -n openshift-monitoring get configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |",
"oc apply -f cluster-monitoring-config.yaml",
"oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f user-workload-monitoring-config.yaml",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: <configuration_for_the_component>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 1 volumeClaimTemplate: spec: storageClassName: fast volumeMode: Filesystem resources: requests: storage: 40Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: <configuration_for_the_component>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: 1 retention: 24h 2 resources: requests: cpu: 200m 3 memory: 2Gi 4",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: nodeSelector: <node_key>: <node_value> <node_key>: <node_value> <...>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusOperator: nodeSelector: nodename: controlplane1 prometheusK8s: nodeSelector: nodename: worker1 nodename: worker2 alertmanagerMain: nodeSelector: nodename: worker1 nodename: worker2 kubeStateMetrics: nodeSelector: nodename: worker1 grafana: nodeSelector: nodename: worker1 telemeterClient: nodeSelector: nodename: worker1 k8sPrometheusAdapter: nodeSelector: nodename: worker1 nodename: worker2 openshiftStateMetrics: nodeSelector: nodename: worker1 thanosQuerier: nodeSelector: nodename: worker1 nodename: worker2",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: nodeSelector: <node_key>: <node_value> <node_key>: <node_value> <...>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: nodeSelector: nodename: worker1 prometheus: nodeSelector: nodename: worker1 nodename: worker2 thanosRuler: nodeSelector: nodename: worker1 nodename: worker2",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 20Gi",
"for p in USD(oc -n openshift-monitoring get pvc -l app.kubernetes.io/name=prometheus -o jsonpath='{range .items[*]}{.metadata.name} {end}'); do oc -n openshift-monitoring patch pvc/USD{p} --patch '{\"spec\": {\"resources\": {\"requests\": {\"storage\":\"100Gi\"}}}}'; done",
"oc delete statefulset -l app.kubernetes.io/name=prometheus --cascade=orphan",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials>",
"basicAuth: username: <usernameSecret> password: <passwordSecret>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" basicAuth: username: name: remoteWriteAuth key: user password: name: remoteWriteAuth key: password",
"tlsConfig: ca: <caSecret> cert: <certSecret> keySecret: <keySecret>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" tlsConfig: ca: secret: name: selfsigned-mtls-bundle key: ca.crt cert: secret: name: selfsigned-mtls-bundle key: client.crt keySecret: name: selfsigned-mtls-bundle key: client.key",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials> <write_relabel_configs>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials> <write_relabel_configs>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11",
"oc apply -f monitoring-stack-alerts.yaml",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | grafana: enabled: false",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc -n openshift-user-workload-monitoring get pod",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h",
"oc policy add-role-to-user <role> <user> -n <namespace> 1",
"oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring",
"SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print USD1 }'`",
"TOKEN=`echo USD(oc get secret USDSECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d`",
"THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'`",
"NAMESPACE=ns1",
"curl -X GET -kG \"https://USDTHANOS_QUERIER_HOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\" -H \"Authorization: Bearer USDTOKEN\"",
"{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{\"__name__\":\"up\",\"endpoint\":\"web\",\"instance\":\"10.129.0.46:8080\",\"job\":\"prometheus-example-app\",\"namespace\":\"ns1\",\"pod\":\"prometheus-example-app-68d47c4fb6-jztp2\",\"service\":\"prometheus-example-app\"},\"value\":[1591881154.748,\"1\"]}]}}",
"oc label namespace my-project 'openshift.io/user-monitoring=false'",
"oc label namespace my-project 'openshift.io/user-monitoring-'",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false",
"oc -n openshift-user-workload-monitoring get pod",
"No resources found in openshift-user-workload-monitoring project.",
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.1 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-example-monitor name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n ns1 get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0",
"oc apply -f example-app-alerting-rule.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 labels: openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0",
"oc apply -f example-app-alerting-rule.yaml",
"oc -n <project> get prometheusrule",
"oc -n <project> get prometheusrule <rule> -o yaml",
"oc -n <namespace> delete prometheusrule <foo>",
"oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 5m receiver: watchdog - matchers: - \"service=<your_service>\" 1 routes: - matchers: - <your_matching_rules> 2 receiver: <receiver> 3 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration>",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 5m receiver: watchdog - matchers: - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \" your-key \"",
"oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-",
"oc -n openshift-monitoring get routes",
"NAME HOST/PORT alertmanager-main alertmanager-main-openshift-monitoring.apps._url_.openshift.com grafana grafana-openshift-monitoring.apps._url_.openshift.com prometheus-k8s prometheus-k8s-openshift-monitoring.apps._url_.openshift.com thanos-querier thanos-querier-openshift-monitoring.apps._url_.openshift.com",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10,count by (job)({__name__=~\".+\"}))"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/monitoring/modifying-retention-time-for-prometheus-metrics-data_configuring-the-monitoring-stack |
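The retention, storage, and remote-write edits above only take effect after the Cluster Monitoring Operator redeploys the affected pods. Below is a minimal verification sketch; it assumes the default prometheus-k8s stateful set name and that the retention value is passed to the prometheus container as a command-line argument, so treat both as assumptions to confirm on your cluster.

    # Check that the retention flag on the Prometheus pods matches the config map
    # (stateful set and container names are assumed defaults).
    oc -n openshift-monitoring get statefulset prometheus-k8s \
      -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus")].args}' \
      | tr ',' '\n' | grep retention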
4.6. Transaction Wrapping Modes | 4.6. Transaction Wrapping Modes You can set your transaction wrapping to one of the following modes: ON This mode always wraps every command in a transaction without checking whether it is required. This is the safest mode. OFF This mode never automatically wraps a command in a transaction or checks whether it needs to wrap a command. This mode can be dangerous as it will allow multiple source updates outside of a transaction without an error. This mode has best performance for applications that do not use updates or transactions. DETECT This mode assumes that the user does not know how to execute multiple source updates in a transaction. JBoss Data Virtualization checks every command to see whether it is a multiple source update and wraps it in a transaction. If it is single source then it uses the source level command transaction. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/transaction_wrapping_modes1 |
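If the JDBC driver follows the usual Teiid convention underlying JBoss Data Virtualization, the wrapping mode can be chosen per connection through a URL property. The sketch below assumes that property is autoCommitTxn with values ON, OFF, and DETECT; the host, port, and VDB name are placeholders.

    # Hypothetical connection URL selecting DETECT mode (autoCommitTxn is an
    # assumption from the underlying Teiid driver; verify against your driver docs).
    URL="jdbc:teiid:MyVDB@mm://dvhost.example.com:31000;autoCommitTxn=DETECT"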
Getting started with activation keys on the Hybrid Cloud Console | Getting started with activation keys on the Hybrid Cloud Console Subscription Central 1-latest Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_activation_keys_on_the_hybrid_cloud_console/index |
Monitoring and logging | Monitoring and logging Red Hat Developer Hub 1.4 Tracking performance and capturing insights with monitoring and logging tools in Red Hat Developer Hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/monitoring_and_logging/index |
Chapter 6. I/O Scheduling | Chapter 6. I/O Scheduling You can use an input/output (I/O) scheduler to improve disk performance both when Red Hat Enterprise Linux 7 is a virtualization host and when it is a virtualization guest. 6.1. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Host When using Red Hat Enterprise Linux 7 as a host for virtualized guests, the default deadline scheduler is usually ideal. This scheduler performs well on nearly all workloads. However, if maximizing I/O throughput is more important than minimizing I/O latency on the guest workloads, it may be beneficial to use the cfq scheduler instead. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-Disk_IO_Scheduler
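On Red Hat Enterprise Linux 7 the active scheduler can be inspected and switched per block device through sysfs. A quick sketch, assuming a device named sda; the runtime change does not persist across reboots (use the elevator= kernel parameter or tuned for a persistent setting).

    # The bracketed entry is the scheduler currently in effect,
    # for example: noop [deadline] cfq
    cat /sys/block/sda/queue/scheduler
    # Switch to cfq when guest throughput matters more than latency.
    echo cfq > /sys/block/sda/queue/scheduler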
Chapter 19. Using the system health dashboard | Chapter 19. Using the system health dashboard The Red Hat Advanced Cluster Security for Kubernetes system health dashboard provides a single interface for viewing health-related information about Red Hat Advanced Cluster Security for Kubernetes components. Note The system health dashboard is only available on Red Hat Advanced Cluster Security for Kubernetes 3.0.53 and newer. 19.1. System health dashboard details To access the health dashboard: In the RHACS portal, go to Platform Configuration → System Health. The health dashboard organizes information in the following groups: Cluster Health - Shows the overall state of the Red Hat Advanced Cluster Security for Kubernetes cluster. Vulnerability Definitions - Shows the last update time of vulnerability definitions. Image Integrations - Shows the health of all registries that you have integrated. Notifier Integrations - Shows the health of any notifiers (Slack, email, Jira, or other similar integrations) that you have integrated. Backup Integrations - Shows the health of any backup providers that you have integrated. The dashboard lists the following states for different components: Healthy - The component is functional. Degraded - The component is partially unhealthy. This state means the cluster is functional, but some components are unhealthy and require attention. Unhealthy - This component is not healthy and requires immediate attention. Uninitialized - The component has not yet reported back to Central to have its health assessed. An uninitialized state may sometimes require attention, but often components report back the health status after a few minutes or when the integration is used. Cluster health section The Cluster Overview shows information about your Red Hat Advanced Cluster Security for Kubernetes cluster health. It reports the health information about the following: Collector Status - It shows whether the Collector pod that Red Hat Advanced Cluster Security for Kubernetes uses is reporting healthy. Sensor Status - It shows whether the Sensor pod that Red Hat Advanced Cluster Security for Kubernetes uses is reporting healthy. Sensor Upgrade - It shows whether the Sensor is running the correct version when compared with Central. Credential Expiration - It shows if the credentials for Red Hat Advanced Cluster Security for Kubernetes are nearing expiration. Note Clusters in the Uninitialized state are not reported in the number of clusters secured by Red Hat Advanced Cluster Security for Kubernetes until they check in. Vulnerabilities definition section The Vulnerabilities Definition section shows the last time vulnerability definitions were updated and if the definitions are up to date. Integrations section There are three integration sections: Image Integrations, Notifier Integrations, and Backup Integrations. Similar to the Cluster Health section, these sections list the number of unhealthy integrations if they exist. Otherwise, all integrations report as healthy. Note The Integrations section lists the healthy integrations as 0 if any of the following conditions are met: You have not integrated Red Hat Advanced Cluster Security for Kubernetes with any third-party tools. You have integrated with some tools, but disabled the integrations, or have not set up any policy violations. 19.2. Viewing product usage data RHACS provides product usage data for the number of secured Kubernetes nodes and CPU units for secured clusters based on metrics collected from RHACS sensors.
This information can be useful to estimate RHACS consumption data for reporting. For more information on how CPU units are defined in Kubernetes, see CPU resource units . Note OpenShift Container Platform provides its own usage reports; this information is intended for use with self-managed Kubernetes systems. RHACS provides the following usage data in the web portal and API: Currently secured CPU units: The number of Kubernetes CPU units used by your RHACS secured clusters, as of the latest metrics collection. Currently secured node count: The number of Kubernetes nodes secured by RHACS, as of the latest metrics collection. Maximum secured CPU units: The maximum number of CPU units used by your RHACS secured clusters, as measured hourly and aggregated for the time period defined by the Start date and End date . Maximum secured node count: The maximum number of Kubernetes nodes secured by RHACS, as measured hourly and aggregated for the time period defined by the Start date and End date . CPU units observation date: The date on which the maximum secured CPU units data was collected. Node count observation date: The date on which the maximum secured node count data was collected. The sensors collect data every 5 minutes, so there can be a short delay in displaying the current data. To view historical data, you must configure the Start date and End date and download the data file. The date range is inclusive and depends on your time zone. The presented maximum values are computed based on hourly maximums for the requested period. The hourly maximums are available for download in CSV format. Note The data shown is not sent to Red Hat or displayed as Prometheus metrics. Procedure In the RHACS portal, go to Platform Configuration → System Health. Click Show product usage . In the Start date and End date fields, choose the dates for which you want to display data. This range is inclusive and depends on your time zone. Optional: To download the detailed data, click Download CSV . You can also obtain this data by using the ProductUsageService API object. For more information, go to Help → API reference in the RHACS portal. 19.3. Generating a diagnostic bundle by using the RHACS portal You can generate a diagnostic bundle by using the system health dashboard in the RHACS portal. Prerequisites To generate a diagnostic bundle, you need read permission for the Administration resource. Procedure In the RHACS portal, select Platform Configuration → System Health . On the System Health view header, click Generate Diagnostic Bundle . For the Filter by clusters drop-down menu, select the clusters for which you want to generate the diagnostic data. For Filter by starting time , specify the date and time (in UTC format) from which you want to include the diagnostic data. Click Download Diagnostic Bundle . 19.3.1. Additional resources Generating a diagnostic bundle | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/use-system-health-dashboard
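The same usage figures can be pulled from a shell. In the sketch below, the /v1/product/usage/secured-units/current path is an assumption about how the ProductUsageService is exposed, and the Central address and token are placeholders; confirm the exact route in the in-product API reference.

    ROX_CENTRAL="central.example.com"   # placeholder Central address
    curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
      "https://$ROX_CENTRAL/v1/product/usage/secured-units/current"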
Upgrade Red Hat Quay | Upgrade Red Hat Quay Red Hat Quay 3.10 Upgrade Red Hat Quay Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/upgrade_red_hat_quay/index |
Chapter 4. UserIdentityMapping [user.openshift.io/v1] | Chapter 4. UserIdentityMapping [user.openshift.io/v1] Description UserIdentityMapping maps a user to an identity Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources identity ObjectReference Identity is a reference to an identity kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata user ObjectReference User is a reference to a user 4.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/useridentitymappings POST : create an UserIdentityMapping /apis/user.openshift.io/v1/useridentitymappings/{name} DELETE : delete an UserIdentityMapping GET : read the specified UserIdentityMapping PATCH : partially update the specified UserIdentityMapping PUT : replace the specified UserIdentityMapping 4.2.1. /apis/user.openshift.io/v1/useridentitymappings Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create an UserIdentityMapping Table 4.2. Body parameters Parameter Type Description body UserIdentityMapping schema Table 4.3. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 202 - Accepted UserIdentityMapping schema 401 - Unauthorized Empty 4.2.2. /apis/user.openshift.io/v1/useridentitymappings/{name} Table 4.4. 
Global path parameters Parameter Type Description name string name of the UserIdentityMapping HTTP method DELETE Description delete an UserIdentityMapping Table 4.5. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserIdentityMapping Table 4.7. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified UserIdentityMapping Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified UserIdentityMapping Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.11. Body parameters Parameter Type Description body UserIdentityMapping schema Table 4.12. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/user_and_group_apis/useridentitymapping-user-openshift-io-v1 |
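Exercising the GET endpoint above from a shell is straightforward. This sketch assumes a cluster API URL and a mapping name, both illustrative; UserIdentityMapping objects are typically named after the identity, in <provider>:<username> form.

    # Read one mapping with the current user's token (names are placeholders).
    curl -sk -H "Authorization: Bearer $(oc whoami -t)" \
      "https://api.cluster.example.com:6443/apis/user.openshift.io/v1/useridentitymappings/my_provider:alice"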
Chapter 2. Admin REST API | Chapter 2. Admin REST API Red Hat build of Keycloak comes with a fully functional Admin REST API with all features provided by the Admin Console. To invoke the API you need to obtain an access token with the appropriate permissions. The required permissions are described in the Server Administration Guide . You can obtain a token by enabling authentication for your application using Red Hat build of Keycloak; see the Securing Applications and Services Guide. You can also use direct access grant to obtain an access token. 2.1. Examples of using CURL 2.1.1. Authenticating with a username and password Note The following example assumes that you created the user admin with the password password in the master realm as shown in the Getting Started Guide tutorial. Procedure Obtain an access token for the user in the realm master with username admin and password password : curl \ -d "client_id=admin-cli" \ -d "username=admin" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" Note By default this token expires in 1 minute The result will be a JSON document. Invoke the API you need by extracting the value of the access_token property. Invoke the API by including the value in the Authorization header of requests to the API. The following example shows how to get the details of the master realm: curl \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ "http://localhost:8080/admin/realms/master" 2.1.2. Authenticating with a service account To authenticate against the Admin REST API using a client_id and a client_secret , perform this procedure. Procedure Make sure the client is configured as follows: client_id is a confidential client that belongs to the realm master client_id has Service Accounts Enabled option enabled client_id has a custom "Audience" mapper Included Client Audience: security-admin-console Check that client_id has the role 'admin' assigned in the "Service Account Roles" tab. curl \ -d "client_id=<YOUR_CLIENT_ID>" \ -d "client_secret=<YOUR_CLIENT_SECRET>" \ -d "grant_type=client_credentials" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" 2.2. Additional resources Server Administration Guide Securing Applications and Services Guide API Documentation | [
"curl -d \"client_id=admin-cli\" -d \"username=admin\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"curl -H \"Authorization: bearer eyJhbGciOiJSUz...\" \"http://localhost:8080/admin/realms/master\"",
"curl -d \"client_id=<YOUR_CLIENT_ID>\" -d \"client_secret=<YOUR_CLIENT_SECRET>\" -d \"grant_type=client_credentials\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\""
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_developer_guide/admin_rest_api |
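The two curl calls above compose into a single scripted exchange. This sketch assumes jq is available for extracting access_token; everything else comes from the examples in the section.

    TOKEN=$(curl -s \
      -d "client_id=admin-cli" -d "username=admin" \
      -d "password=password" -d "grant_type=password" \
      "http://localhost:8080/realms/master/protocol/openid-connect/token" \
      | jq -r '.access_token')
    curl -s -H "Authorization: bearer $TOKEN" \
      "http://localhost:8080/admin/realms/master"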
function::return_str | function::return_str Name function::return_str - Formats the return value as a string Synopsis Arguments format Variable to determine return type base value ret Return value (typically $return) Description This function is used by the syscall tapset, and returns a string. Set format equal to 1 for a decimal, 2 for hex, 3 for octal. Note that this function is preferred over returnstr. | [
"return_str:string(format:long,ret:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-return-str |
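In practice the function is applied to $return inside a .return probe. A hedged one-liner, assuming SystemTap and matching kernel debuginfo are installed; the probed syscall is only an example.

    # Print openat(2) return values in decimal (format 1).
    stap -e 'probe syscall.openat.return { printf("%s\n", return_str(1, $return)) }'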
Chapter 36. Google Pubsub | Chapter 36. Google Pubsub Since Camel 2.19 Both producer and consumer are supported. The Google Pubsub component provides access to the Cloud Pub/Sub Infrastructure via the Google Cloud Java Client for Google Cloud Pub/Sub . 36.1. Dependencies When using google-pubsub with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-pubsub-starter</artifactId> </dependency> 36.2. URI Format The Google Pubsub Component uses the following URI format: Destination Name can be either a topic or a subscription name. 36.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 36.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 36.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 36.4. Component Options The Google Pubsub component supports 10 options, which are listed below. Name Description Default Type authenticate (common) Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true boolean endpoint (common) Endpoint to use with local Pub/Sub emulator. String serviceAccountKey (common) The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean synchronousPullRetryableCodes (consumer) Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean publisherCacheSize (producer) Maximum number of producers to cache. This could be increased if you have producers for lots of different topics. int publisherCacheTimeout (producer) How many milliseconds should each producer stay alive in the cache. int autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean publisherTerminationTimeout (advanced) How many milliseconds should a producer be allowed to terminate. int 36.5. Endpoint Options The Google Pubsub endpoint is configured using URI syntax: with the following path and query parameters: 36.5.1. Path Parameters (2 parameters) Name Description Default Type projectId (common) Required The Google Cloud PubSub Project Id. String destinationName (common) Required The Destination Name. For the consumer this will be the subscription name, while for the producer this will be the topic name. String 36.5.2. Query Parameters (15 parameters) Name Description Default Type authenticate (common) Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true boolean loggerId (common) Logger ID to use when a match to the parent route required. String serviceAccountKey (common) The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String ackMode (consumer) AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream process has to ack/nack explicitly. Enum values: AUTO NONE AUTO AckMode concurrentConsumers (consumer) The number of parallel streams consuming from the subscription. 1 Integer maxAckExtensionPeriod (consumer) Set the maximum period a message ack deadline will be extended. Value in seconds. 3600 int maxMessagesPerPoll (consumer) The max number of messages to receive from the server in a single API call. 1 Integer synchronousPull (consumer) Synchronously pull batches of messages. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageOrderingEnabled (producer (advanced)) Should message ordering be enabled. false boolean pubsubEndpoint (producer (advanced)) Pub/Sub endpoint to use. Required when using message ordering, and ensures that messages are received in order even when multiple publishers are used. String serializer (producer (advanced)) Autowired A custom GooglePubsubSerializer to use for serializing message payloads in the producer. GooglePubsubSerializer 36.6. Message Headers The Google Pubsub component supports 5 message header(s), which are listed below: Name Description Default Type CamelGooglePubsubMessageId (common) Constant: MESSAGE_ID The ID of the message, assigned by the server when the message is published. String CamelGooglePubsubMsgAckId (consumer) Constant: ACK_ID The ID used to acknowledge the received message. String CamelGooglePubsubPublishTime (consumer) Constant: PUBLISH_TIME The time at which the message was published. Timestamp CamelGooglePubsubAttributes (common) Constant: ATTRIBUTES The attributes of the message. Map CamelGooglePubsubOrderingKey (producer) Constant: ORDERING_KEY If non-empty, identifies related messages for which publish order should be respected. String 36.7. Producer Endpoints Producer endpoints can accept and deliver to PubSub individual and grouped exchanges alike. Grouped exchanges have Exchange.GROUPED_EXCHANGE property set. Google PubSub expects the payload to be a byte[] array. Producer endpoints will send: String body as byte[] encoded as UTF-8 byte[] body as is Everything else will be serialised into byte[] array A Map set as message header GooglePubsubConstants.ATTRIBUTES will be sent as PubSub attributes. Google PubSub supports ordered message delivery. To enable this, set the options messageOrderingEnabled to true and pubsubEndpoint to a GCP region. When producing messages set the message header GooglePubsubConstants.ORDERING_KEY . This will be set as the PubSub orderingKey for the message. More information in Ordering messages . Once the exchange has been delivered to PubSub, the PubSub Message ID will be assigned to the header GooglePubsubConstants.MESSAGE_ID . 36.8. Consumer Endpoints Google PubSub will redeliver the message if it has not been acknowledged within the time period set as a configuration option on the subscription. The component will acknowledge the message once exchange processing has been completed. If the route throws an exception, the exchange is marked as failed and the component will NACK the message - it will be redelivered immediately. To ack/nack the message the component uses Acknowledgement ID stored as header GooglePubsubConstants.ACK_ID .
If the header is removed or tampered with, the ack will fail and the message will be redelivered again after the ack deadline. 36.9. Message Body The consumer endpoint returns the content of the message as byte[] - exactly as the underlying system sends it. It is up for the route to convert/unmarshall the contents. 36.10. Authentication Configuration By default this component aquires credentials using GoogleCredentials.getApplicationDefault() . This behavior can be disabled by setting authenticate option to false , in which case requests to Google API will be made without authentication details. This is only desirable when developing against an emulator. This behavior can be altered by supplying a path to a service account key file. 36.11. Rollback and Redelivery The rollback for Google PubSub relies on the idea of the Acknowledgement Deadline - the time period where Google PubSub expects to receive the acknowledgement. If the acknowledgement has not been received, the message is redelivered. Google provides an API to extend the deadline for a message. More information in Google PubSub Documentation . So, rollback is effectively a deadline extension API call with zero value - i.e. deadline is reached now and message can be redelivered to the consumer. It is possible to delay the message redelivery by setting the acknowledgement deadline explicitly for the rollback by setting the message header GooglePubsubConstants.ACK_DEADLINE to the value in seconds. 36.12. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.google-pubsub.authenticate Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true Boolean camel.component.google-pubsub.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.google-pubsub.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.google-pubsub.enabled Whether to enable auto configuration of the google-pubsub component. This is enabled by default. Boolean camel.component.google-pubsub.endpoint Endpoint to use with local Pub/Sub emulator. String camel.component.google-pubsub.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.google-pubsub.publisher-cache-size Maximum number of producers to cache. This could be increased if you have producers for lots of different topics. Integer camel.component.google-pubsub.publisher-cache-timeout How many milliseconds should each producer stay alive in the cache. Integer camel.component.google-pubsub.publisher-termination-timeout How many milliseconds should a producer be allowed to terminate. Integer camel.component.google-pubsub.service-account-key The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.google-pubsub.synchronous-pull-retryable-codes Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-pubsub-starter</artifactId> </dependency>",
"google-pubsub://project-id:destinationName?[options]",
"google-pubsub:projectId:destinationName"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-google-pubsub-component-starter |
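With the Spring Boot starter, the auto-configuration keys in the table above map directly onto application.properties entries. A sketch using only options documented in this chapter; the key file path and values are placeholders.

    # application.properties (values are illustrative)
    camel.component.google-pubsub.authenticate=true
    camel.component.google-pubsub.service-account-key=file:/etc/gcp/key.json
    camel.component.google-pubsub.publisher-cache-size=100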
2.7. Managing the glusterd Service | 2.7. Managing the glusterd Service After installing Red Hat Gluster Storage, the glusterd service automatically starts on all the servers in the trusted storage pool. The service can be manually started and stopped by using the glusterd service commands. For more information on creating trusted storage pools, see the Red Hat Gluster Storage 3.5 Administration Guide . Use Red Hat Gluster Storage to dynamically change the configuration of glusterFS volumes without restarting servers or remounting volumes on clients. The glusterFS daemon glusterd also offers elastic volume management. Use the gluster CLI commands to decouple logical storage volumes from physical hardware. This allows the user to grow, shrink, and migrate storage volumes without any application downtime. As storage is added to the cluster, the volumes are distributed across the cluster. This distribution ensures that the cluster is always available despite changes to the underlying hardware. 2.7.1. Manually Starting and Stopping glusterd Use the following instructions to manually start and stop the glusterd service. Manually start glusterd as follows: or Manually stop glusterd as follows: or | [
"/etc/init.d/glusterd start",
"service glusterd start",
"/etc/init.d/glusterd stop",
"service glusterd stop"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/ch02s07 |
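On systemd-based hosts (Red Hat Enterprise Linux 7 and later) the equivalent service management goes through systemctl; the unit name below assumes the default glusterd unit.

    systemctl start glusterd
    systemctl stop glusterd
    systemctl status glusterd   # verify the daemon state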
Chapter 43. PodService | Chapter 43. PodService 43.1. ExportPods GET /v1/export/pods 43.1.1. Description 43.1.2. Parameters 43.1.2.1. Query Parameters Name Description Required Default Pattern timeout - null query - null 43.1.3. Return Type Stream_result_of_v1ExportPodResponse 43.1.4. Content Type application/json 43.1.5. Responses Table 43.1. HTTP Response Codes Code Message Datatype 200 A successful response.(streaming responses) Stream_result_of_v1ExportPodResponse 0 An unexpected error response. RuntimeError 43.1.6. Samples 43.1.7. Common object reference 43.1.7.1. PodContainerInstanceList Field Name Required Nullable Type Description Format instances List of StorageContainerInstance 43.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 43.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 43.1.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 43.1.7.4. 
RuntimeStreamError Field Name Required Nullable Type Description Format grpcCode Integer int32 httpCode Integer int32 message String httpStatus String details List of ProtobufAny 43.1.7.5. StorageContainerInstance ContainerInstanceID allows to uniquely identify a container within a cluster. Field Name Required Nullable Type Description Format instanceId StorageContainerInstanceID containingPodId String The pod containing this container instance (kubernetes only). containerName String Container name. containerIps List of string The IP addresses of this container. started Date date-time imageDigest String finished Date The finish time of the container, if it finished. date-time exitCode Integer The exit code of the container. Only valid when finished is populated. int32 terminationReason String The reason for the container's termination, if it finished. 43.1.7.6. StorageContainerInstanceID Field Name Required Nullable Type Description Format containerRuntime StorageContainerRuntime UNKNOWN_CONTAINER_RUNTIME, DOCKER_CONTAINER_RUNTIME, CRIO_CONTAINER_RUNTIME, id String The ID of the container, specific to the given runtime. node String The node on which this container runs. 43.1.7.7. StorageContainerRuntime Enum Values UNKNOWN_CONTAINER_RUNTIME DOCKER_CONTAINER_RUNTIME CRIO_CONTAINER_RUNTIME 43.1.7.8. StoragePod Pod represents information for a currently running pod or deleted pod in an active deployment. Field Name Required Nullable Type Description Format id String name String deploymentId String namespace String clusterId String liveInstances List of StorageContainerInstance terminatedInstances List of PodContainerInstanceList Must be a list of lists, so we can perform search queries (does not work for maps that aren't <string, string>) There is one bucket (list) per container name. started Date Time Kubernetes reports the pod was created. date-time 43.1.7.9. StreamResultOfV1ExportPodResponse Field Name Required Nullable Type Description Format result V1ExportPodResponse error RuntimeStreamError 43.1.7.10. V1ExportPodResponse Field Name Required Nullable Type Description Format pod StoragePod 43.2. GetPods GET /v1/pods GetPods returns the pods. 43.2.1. Description 43.2.2. Parameters 43.2.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 43.2.3. Return Type V1PodsResponse 43.2.4. Content Type application/json 43.2.5. Responses Table 43.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1PodsResponse 0 An unexpected error response. RuntimeError 43.2.6. Samples 43.2.7. Common object reference 43.2.7.1. PodContainerInstanceList Field Name Required Nullable Type Description Format instances List of StorageContainerInstance 43.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 43.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 43.2.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 43.2.7.4. StorageContainerInstance ContainerInstanceID allows to uniquely identify a container within a cluster. Field Name Required Nullable Type Description Format instanceId StorageContainerInstanceID containingPodId String The pod containing this container instance (kubernetes only). containerName String Container name. containerIps List of string The IP addresses of this container. started Date date-time imageDigest String finished Date The finish time of the container, if it finished. date-time exitCode Integer The exit code of the container. Only valid when finished is populated. int32 terminationReason String The reason for the container's termination, if it finished. 43.2.7.5. StorageContainerInstanceID Field Name Required Nullable Type Description Format containerRuntime StorageContainerRuntime UNKNOWN_CONTAINER_RUNTIME, DOCKER_CONTAINER_RUNTIME, CRIO_CONTAINER_RUNTIME, id String The ID of the container, specific to the given runtime. node String The node on which this container runs. 43.2.7.6. 
StorageContainerRuntime Enum Values UNKNOWN_CONTAINER_RUNTIME DOCKER_CONTAINER_RUNTIME CRIO_CONTAINER_RUNTIME 43.2.7.7. StoragePod Pod represents information for a currently running pod or deleted pod in an active deployment. Field Name Required Nullable Type Description Format id String name String deploymentId String namespace String clusterId String liveInstances List of StorageContainerInstance terminatedInstances List of PodContainerInstanceList Must be a list of lists, so we can perform search queries (does not work for maps that aren't <string, string>) There is one bucket (list) per container name. started Date Time Kubernetes reports the pod was created. date-time 43.2.7.8. V1PodsResponse Field Name Required Nullable Type Description Format pods List of StoragePod | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Stream result of v1ExportPodResponse",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/podservice |
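The two PodService endpoints above differ mainly in response shape: GetPods returns a single paginated V1PodsResponse, while ExportPods streams one V1ExportPodResponse per pod. The following is a minimal sketch of calling both with curl; the ROX_ENDPOINT and ROX_API_TOKEN variables, the bearer-token authentication, and the example search query are assumptions about a typical RHACS deployment, not part of the reference above.

# List pods for one cluster, 10 at a time (GetPods; query and pagination.limit are the documented parameters).
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://$ROX_ENDPOINT/v1/pods?query=Cluster:production&pagination.limit=10&pagination.offset=0"

# Stream every pod as a sequence of JSON result objects (ExportPods; timeout is the documented parameter).
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://$ROX_ENDPOINT/v1/export/pods?timeout=60"

Each object in the streamed output wraps the pod in a result field, matching the StreamResultOfV1ExportPodResponse schema above.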
3. More to Come | 3. More to Come The Reference Guide is part of Red Hat's commitment to provide useful and timely support to Red Hat Enterprise Linux users. Future editions will feature expanded information on changes to system structure and organization, new and powerful security tools, and other resources to help you extend the power of the Red Hat Enterprise Linux system - and your ability to use it. That is where you can help. 3.1. We Need Feedback! If you find an error in the Reference Guide , or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rhel-rg . Be sure to mention the manual's identifier: If you mention the manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily. | [
"rhel-rg"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-intro-more-to-come |
Chapter 5. Console Access | Chapter 5. Console Access When normal (non-root) users log into a computer locally, they are given two types of special permissions: They can run certain programs that they otherwise cannot run. They can access certain files that they otherwise cannot access. These files normally include special device files used to access diskettes, CD-ROMs, and so on. Since there are multiple consoles on a single computer and multiple users can be logged into the computer locally at the same time, one of the users has to essentially win the race to access the files. The first user to log in at the console owns those files. Once the first user logs out, the next user who logs in owns the files. In contrast, every user who logs in at the console is allowed to run programs that accomplish tasks normally restricted to the root user. If X is running, these actions can be included as menu items in a graphical user interface. As shipped, these console-accessible programs include halt , poweroff , and reboot . 5.1. Disabling Console Program Access for Non-root Users Non-root users can be denied console access to any program in the /etc/security/console.apps/ directory. To list these programs, run the following command: For each of these programs, console access denial can be configured using the program's Pluggable Authentication Module (PAM) configuration file. For information about PAMs and their usage, see the chapter Pluggable Authentication Modules of the Red Hat Enterprise Linux 6 Managing Single Sign-On and Smart Cards guide. The PAM configuration file for each program in /etc/security/console.apps/ resides in the /etc/pam.d/ directory and is named the same as the program. Using this file, you can configure PAM to deny access to the program if the user is not root. To do that, insert the line auth requisite pam_deny.so directly after the first uncommented line auth sufficient pam_rootok.so . Example 5.1. Disabling Access to the Reboot Program To disable non-root console access to /etc/security/console.apps/reboot , insert the line auth requisite pam_deny.so into the /etc/pam.d/reboot PAM configuration file: With this setting, all non-root access to the reboot utility is disabled. Additionally, several programs in /etc/security/console.apps/ partially derive their PAM configuration from the /etc/pam.d/config-util configuration file. This allows you to change the configuration for all these programs at once by editing /etc/pam.d/config-util . To find all these programs, search for PAM configuration files that refer to the config-util file: Disabling console program access as described above may be useful in environments where the console is otherwise secured. Security measures may include password protection for the BIOS and boot loader, disabling rebooting on pressing Ctrl+Alt+Delete, and disabling the power and reset switches. In these cases, you may want to restrict normal users' access to halt , poweroff , reboot , and other programs, which by default are accessible from the console. | [
"~]USD ls /etc/security/console.apps abrt-cli-root config-util eject halt poweroff reboot rhn_register setup subscription-manager subscription-manager-gui system-config-network system-config-network-cmd xserver",
"#%PAM-1.0 auth sufficient pam_rootok.so auth requisite pam_deny.so auth required pam_console.so #auth include system-auth account required pam_permit.so",
"~]# grep -l \"config-util\" /etc/pam.d/* /etc/pam.d/abrt-cli-root /etc/pam.d/rhn_register /etc/pam.d/subscription-manager /etc/pam.d/subscription-manager-gui /etc/pam.d/system-config-network /etc/pam.d/system-config-network-cmd"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-console_access |
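Because several console programs derive their PAM configuration from /etc/pam.d/config-util , the same auth requisite pam_deny.so rule shown in Example 5.1 can be applied to all of them with one edit. The following is a sketch only; it assumes the stock config-util file begins with the usual auth sufficient pam_rootok.so line, so back up the file and confirm its contents before editing.

# Back up the shared PAM file, then insert the deny rule directly after the pam_rootok.so line.
cp /etc/pam.d/config-util /etc/pam.d/config-util.bak
sed -i '/^auth[[:space:]]\+sufficient[[:space:]]\+pam_rootok\.so/a auth       requisite     pam_deny.so' /etc/pam.d/config-util

After this change, every program whose PAM file includes config-util denies console access to non-root users.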
5.150. libhbaapi | 5.150.1. RHBA-2012:0847 - libhbaapi bug fix update Updated libhbaapi packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The libhbaapi library is the Host Bus Adapter (HBA) API library for Fibre Channel and Storage Area Network (SAN) resources. It contains a unified API that programmers can use to access, query, observe and modify SAN and Fibre Channel services. The libhbaapi build environment has been upgraded to upstream version 2.2.5, which provides a number of bug fixes over the previous version. (BZ# 788504 ) Bug Fix BZ# 806731 Prior to this update, the hba.conf file was not marked for exclusion from verification in the libhbaapi specification file. As a consequence, the file verify function "rpm -V libhbaapi" reported an error in the hba.conf file if the file was changed. This update marks hba.conf in the spec file as "%verify(not md5 size mtime)". Now, the hba.conf file is no longer incorrectly verified. All users of libhbaapi are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libhbaapi |
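For context, the fix described in BZ#806731 amounts to a single attribute on the %files entry for hba.conf in the RPM spec file. The snippet below is illustrative; the exact path and neighboring attributes in the shipped spec file may differ.

# In the %files section of libhbaapi.spec (illustrative entry):
%config(noreplace) %verify(not md5 size mtime) /etc/hba.conf

# With the updated package, verification no longer flags a locally edited hba.conf:
rpm -V libhbaapi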
8.113. libhugetlbfs | 8.113.1. RHBA-2014:1485 - libhugetlbfs bug fix and enhancement update Updated libhugetlbfs packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libhugetlbfs library interacts with the Linux Huge TLB file system to make large pages available to applications in a transparent manner. Note The libhugetlbfs packages have been upgraded to upstream version 2.16.0, which provides a number of bug fixes and enhancements over the previous version, including support for the IBM System z architecture. (BZ# 823006 ) Users of libhugetlbfs are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libhugetlbfs |
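To illustrate the "transparent manner" mentioned above: once huge pages are reserved and hugetlbfs is mounted, the library can back an unmodified binary's heap with large pages via environment variables. This generic usage sketch is not part of the erratum; the mount point, page count, and application name are arbitrary examples.

# Reserve huge pages and mount the hugetlbfs file system (run as root).
echo 20 > /proc/sys/vm/nr_hugepages
mkdir -p /mnt/hugetlbfs
mount -t hugetlbfs none /mnt/hugetlbfs

# Preload the library so malloc is served from huge pages, without recompiling the application.
LD_PRELOAD=libhugetlbfs.so HUGETLB_MORECORE=yes ./myapp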
Chapter 23. Installation and Booting | Chapter 23. Installation and Booting BFS installation fails on VV when automatic LVM partitioning is selected When attempting installation using Boot From SAN (BFS) with an HP StoreServ 3PAR Storage Volume (VV), the installation fails during disk partitioning and LVM volume group activation with the message: The failure is seen across all StoreServ volume types (Std VV, TPVV, TDVV). To work around this problem, if using LVM, select the Custom Partition Layout option and reduce the swap and /home partition size by 1-2 GB. If not using LVM, select the Standard Partition option. (BZ#1190264) Using the --nocore option in the %packages section of a kickstart file may result in a broken system If the --nocore option is used in the %packages section of a kickstart file, core system packages and libraries will not be installed, which may result in the system being unable to perform essential tasks such as user creation, and may render the system unusable. To avoid this problem, do not use --nocore . (BZ#1191897) The zipl boot loader requires target information in each section When calling the zipl tool manually from a command line using a section name as a parameter, the tool previously used the target defined in the default section of the /etc/zipl.conf file. In the current version of zipl, the default section's target is not used automatically, resulting in an error. To work around the problem, manually edit the /etc/zipl.conf configuration file and copy the line starting with target= from the default section to every section. (BZ#1203627) The installer displays the number of multipath devices and the number of selected multipath devices incorrectly Multipath devices are configured properly, but the installer displays the number of devices and the number of selected devices incorrectly. There is no known workaround at this point. (BZ#914637) The installer displays the amount of disk space within multipath devices incorrectly Multipath devices are configured properly, but the installer displays disk space and the number of devices incorrectly. There is no known workaround at this point. (BZ#1014425) | [
"Volume group \"VolGroup\" has insufficient free space."
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/known_issues_installation_and_booting |
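For the zipl workaround (BZ#1203627), the edited /etc/zipl.conf should end up looking roughly like the sketch below, with the target= line from the default section duplicated into every boot section. The image, ramdisk, and parameters values are illustrative placeholders.

[defaultboot]
default = linux
target = /boot

[linux]
# target= copied here from the default section
target = /boot
image = /boot/vmlinuz
ramdisk = /boot/initramfs.img
parameters = "root=/dev/mapper/rootvg-rootlv"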
Chapter 8. Security policies | Chapter 8. Security policies Virtual machine (VM) workloads run as unprivileged pods. So that VMs can use OpenShift Virtualization features, some pods are granted custom security policies that are not available to other pod owners: An extended container_t SELinux policy applies to virt-launcher pods. Security context constraints (SCCs) are defined for the kubevirt-controller service account. 8.1. About workload security By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization. For each VM, a virt-launcher pod runs an instance of libvirt in session mode to manage the VM process. In session mode, the libvirt daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege. There are no supported OpenShift Virtualization features that require root privileges. If a feature requires root, it might not be supported for use with OpenShift Virtualization. 8.2. Extended SELinux policies for virt-launcher pods The container_t SELinux policy for virt-launcher pods is extended to enable essential functions of OpenShift Virtualization. The following policy is required for network multi-queue, which enables network performance to scale as the number of available vCPUs increases: allow process self (tun_socket (relabelfrom relabelto attach_queue)) The following policy allows virt-launcher to read files under the /proc directory, including /proc/cpuinfo and /proc/uptime : allow process proc_type (file (getattr open read)) The following policy allows libvirtd to relay network-related debug messages: allow process self (netlink_audit_socket (nlmsg_relay)) Note Without this policy, any attempt to relay network debug messages is blocked. This might fill the node's audit logs with SELinux denials. The following policies allow libvirtd to access hugetlbfs , which is required to support huge pages: allow process hugetlbfs_t (dir (add_name create write remove_name rmdir setattr)) allow process hugetlbfs_t (file (create unlink)) The following policies allow virtiofs to mount filesystems and access NFS: allow process nfs_t (dir (mounton)) allow process proc_t (dir (mounton)) allow process proc_t (filesystem (mount unmount)) The following policy is inherited from upstream Kubevirt, where it enables passt networking: allow process tmpfs_t (filesystem (mount)) Note OpenShift Virtualization does not support passt at this time. 8.3. Additional OpenShift Container Platform security context constraints and Linux capabilities for the kubevirt-controller service account Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. The virt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These pods are granted permissions by the kubevirt-controller service account. The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods.
The kubevirt-controller service account is granted the following SCCs: scc.AllowHostDirVolumePlugin = true This allows virtual machines to use the hostpath volume plugin. scc.AllowPrivilegedContainer = false This ensures the virt-launcher pod is not run as a privileged container. scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE", "SYS_PTRACE"} SYS_NICE allows setting the CPU affinity. NET_BIND_SERVICE allows DHCP and Slirp operations. SYS_PTRACE enables certain versions of libvirt to find the process ID (PID) of swtpm , a software Trusted Platform Module (TPM) emulator. 8.3.1. Viewing the SCC and RBAC definitions for the kubevirt-controller You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool: $ oc get scc kubevirt-controller -o yaml You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool: $ oc get clusterrole kubevirt-controller -o yaml 8.4. Additional resources Managing security context constraints Using RBAC to define and apply permissions Optimizing virtual machine network performance in the Red Hat Enterprise Linux (RHEL) documentation Using huge pages with virtual machines Configuring huge pages in the RHEL documentation
"oc get scc kubevirt-controller -o yaml",
"oc get clusterrole kubevirt-controller -o yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/virt-additional-security-privileges-controller-and-launcher |
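A quick way to confirm the grants described in section 8.3 on a live cluster is to read the corresponding fields directly from the SCC. This is a sketch; the jsonpath expressions assume the standard SecurityContextConstraints field names, and the values noted in the comments mirror the list above.

# Expected: false - virt-launcher pods are not privileged containers.
oc get scc kubevirt-controller -o jsonpath='{.allowPrivilegedContainer}{"\n"}'

# Expected: SYS_NICE, NET_BIND_SERVICE, and SYS_PTRACE.
oc get scc kubevirt-controller -o jsonpath='{.allowedCapabilities}{"\n"}'

# Expected: true - the hostpath volume plugin is allowed.
oc get scc kubevirt-controller -o jsonpath='{.allowHostDirVolumePlugin}{"\n"}'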
Image Builder Guide | Image Builder Guide Red Hat Enterprise Linux 7 Creating customized system images with Image Builder Eliane Pereira Red Hat Customer Content Services [email protected] Vladimir Slavik Red Hat Customer Content Services Abstract Image Builder is a tool for creating deployment-ready customized system images, for example installation disks, virtual machines, cloud vendor-specific images, and others. Image Builder enables you to create these images faster compared to manual procedures, because it abstracts away the specifics of each output type. Learn how to set up Image Builder and create images with it. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/index |
Storage | Storage OpenShift Container Platform 4.10 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"df -h /var/lib",
"Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% /",
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-claim>",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi",
"apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3",
"securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1",
"cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF",
"cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF",
"cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF",
"oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false",
"apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5",
"oc create -f cinder-persistentvolume.yaml",
"oc create serviceaccount <service_account>",
"oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4",
"{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }",
"{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar",
"\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"",
"apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''",
"apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4",
"oc create -f pv.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual",
"oc create -f pvc.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false",
"oc adm new-project openshift-local-storage",
"oc annotate namespace openshift-local-storage openshift.io/node-selector=''",
"oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'",
"OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"USD{OC_VERSION}\" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f openshift-local-storage.yaml",
"oc -n openshift-local-storage get pods",
"NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m",
"oc get csvs -n openshift-local-storage",
"NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6",
"oc create -f <local-volume>.yaml",
"oc get all -n openshift-local-storage",
"NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"oc create -f <example-pv>.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4",
"oc create -f <local-pvc>.yaml",
"apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3",
"oc create -f <local-pod>.yaml",
"apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM",
"oc apply -f local-volume-set.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg",
"spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists",
"oc edit localvolume <name> -n openshift-local-storage",
"oc delete pv <pv-name>",
"oc debug node/<node-name>",
"chroot /host",
"cd /mnt/openshift-local-storage/<sc-name> 1",
"rm <symlink>",
"oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces",
"oc delete pv <pv-name>",
"oc delete project openshift-local-storage",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7",
"oc get pv",
"NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m",
"ls -lZ /opt/nfs -d",
"drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs",
"id nfsnobody",
"uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)",
"spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2",
"spec: containers: 1 - name: securityContext: runAsUser: 65534 2",
"setsebool -P virt_use_nfs 1",
"/<example_fs> *(rw,root_squash)",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3",
"oc create -f pvc.yaml",
"vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk",
"shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5",
"oc create -f pv1.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4",
"oc create -f pvc1.yaml",
"oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF",
"oc new-app mysql-persistent",
"--> Deploying template \"openshift/mysql-persistent\" to project default",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s",
"kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar",
"oc create -f my-csi-app.yaml",
"oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF",
"oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF",
"oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF",
"create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete",
"oc create -f volumesnapshotclass.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2",
"oc create -f volumesnapshot-dynamic.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1",
"oc create -f volumesnapshot-manual.yaml",
"oc describe volumesnapshot mysnap",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi",
"oc get volumesnapshotcontent",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1",
"oc delete volumesnapshot <volumesnapshot_name>",
"volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted",
"oc delete volumesnapshotcontent <volumesnapshotcontent_name>",
"oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f pvc-restore.yaml",
"oc get pvc",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1",
"oc create -f pvc-clone.yaml",
"oc get pvc pvc-1-clone",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: CustomNoUpgrade customNoUpgrade: enabled: - CSIMigrationAWS 1",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com",
"2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-",
"oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml",
"oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml",
"secret/aws-efs-cloud-credentials created",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi",
"apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3",
"oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created",
"oc get clustercsidriver efs.csi.aws.com -o yaml",
"oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition",
"oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF",
"oc get storageclass",
"oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1",
"oc get co storage",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE storage 4.10.0-0.nightly-2021-11-15-034648 True False False 4m36s",
"oc get pod -n openshift-cluster-csi-drivers",
"NAME READY STATUS RESTARTS AGE azure-file-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m azure-file-csi-driver-node-2tcxr 3/3 Running 0 53m azure-file-csi-driver-node-2xjzm 3/3 Running 0 53m azure-file-csi-driver-node-6wrgk 3/3 Running 0 53m azure-file-csi-driver-node-frvx2 3/3 Running 0 53m azure-file-csi-driver-node-lf5kb 3/3 Running 0 53m azure-file-csi-driver-node-mqdhh 3/3 Running 0 53m azure-file-csi-driver-operator-7d966fc6c5-x74x5 1/1 Running 0 44m",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 10m 1 managed-csi disk.csi.azure.com Delete WaitForFirstConsumer true 35m managed-premium (default) kubernetes.io/azure-disk Delete WaitForFirstConsumer true 35m",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1",
"oc describe storageclass csi-gce-pd-cmek",
"Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi",
"oc apply -f pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f cinder-claim.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2",
"oc create -f pvc-manila.yaml",
"oc get pvc pvc-manila",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3",
"oc create -f pvc-ovirt.yaml",
"oc get pvc pvc-ovirt",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: \"USDopenshift-storage-policy-xxxx\" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"VSphereCSIDriverOperatorCRUpgradeable: VMwareVSphereControllerUpgradeable: found existing unsupported csi.vsphere.vmware.com driver",
"~ USD oc delete CSIDriver csi.vsphere.vmware.com",
"csidriver.storage.k8s.io \"csi.vsphere.vmware.com\" deleted",
"oc edit storageclass <storage_class_name> 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>",
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3",
"oc get storageclass",
"NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/storage/index |
Chapter 64. Placing custom tasks | Chapter 64. Placing custom tasks When a custom task is registered in Red Hat Process Automation Manager, it appears in the process designer palette. The custom task is named and categorized according to the entries in its corresponding WID file. Prerequisites A custom task is registered in Red Hat Process Automation Manager. For more information, see Chapter 63, Registering custom tasks . The custom task is named and categorized according to the corresponding WID file. For more information about WID file locations or formatting, see Chapter 61, Work item definitions . Procedure In Business Central, go to Menu → Design → Projects and click a project. Select the business process that you want to add a custom task to. Select the custom task from the palette and drag it to the BPMN2 diagram. Optional: Change the custom task attributes. For example, change the data output and input from the corresponding WID file. Note If the WID file is not visible in your project and no Work Item Definition object is visible in the Others category of your project, you must register the custom task. For more information about registering a custom task, see Chapter 63, Registering custom tasks . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/custom-tasks-placing-custom-tasks-proc-custom-tasks
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 8 release of Eclipse Temurin includes, see OpenJDK 8u402 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements that the Eclipse Temurin 8.0.402 release provides: Kerberos configuration support for yes and no boolean values In OpenJDK 8.0.402, when using Kerberos configuration settings that require a boolean value, the krb5.conf file also accepts yes and no values as alternatives to true and false . See JDK-8029995 (JDK Bug System) . Increased default value of jdk.jar.maxSignatureFileSize system property OpenJDK 8.0.382 introduced a jdk.jar.maxSignatureFileSize system property for configuring the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file (JDK-8300596). By default, the jdk.jar.maxSignatureFileSize property was set to 8000000 bytes (8 MB), which was too small for some JAR files, such as the Mend (formerly WhiteSource) Unified Agent JAR file. OpenJDK 8.0.402 increases the default value of the jdk.jar.maxSignatureFileSize property to 16000000 bytes (16 MB). See JDK-8312489 (JDK Bug System) . Let's Encrypt ISRG Root X2 CA certificate added In OpenJDK 8.0.402, the cacerts truststore includes the Internet Security Research Group (ISRG) Root X2 certificate authority (CA) certificate from Let's Encrypt: Name: Let's Encrypt Alias name: letsencryptisrgx2 Distinguished name: CN=ISRG Root X2, O=Internet Security Research Group, C=US See JDK-8317374 (JDK Bug System) . Digicert, Inc. root certificates added In OpenJDK 8.0.402, the cacerts truststore includes four Digicert, Inc. root certificates: Certificate 1 Name: DigiCert, Inc. Alias name: digicertcseccrootg5 Distinguished name: CN=DigiCert CS ECC P384 Root G5, O="DigiCert, Inc.", C=US Certificate 2 Name: DigiCert, Inc. Alias name: digicertcsrsarootg5 Distinguished name: CN=DigiCert CS RSA4096 Root G5, O="DigiCert, Inc.", C=US Certificate 3 Name: DigiCert, Inc. Alias name: digicerttlseccrootg5 Distinguished name: CN=DigiCert TLS ECC P384 Root G5, O="DigiCert, Inc.", C=US Certificate 4 Name: DigiCert, Inc. Alias name: digicerttlsrsarootg5 Distinguished name: CN=DigiCert TLS RSA4096 Root G5, O="DigiCert, Inc.", C=US See JDK-8318759 (JDK Bug System) . eMudhra Technologies Limited root certificates added In OpenJDK 8.0.402, the cacerts truststore includes three eMudhra Technologies Limited root certificates: Certificate 1 Name: eMudhra Technologies Limited Alias name: emsignrootcag1 Distinguished name: CN=emSign Root CA - G1, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN Certificate 2 Name: eMudhra Technologies Limited Alias name: emsigneccrootcag3 Distinguished name: CN=emSign ECC Root CA - G3, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN Certificate 3 Name: eMudhra Technologies Limited Alias name: emsignrootcag2 Distinguished name: CN=emSign Root CA - G2, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN See JDK-8319187 (JDK Bug System) . Telia Root CA v2 certificate added In OpenJDK 8.0.402, the cacerts truststore includes the Telia Root CA v2 certificate: Name: Telia Root CA v2 Alias name: teliarootcav2 Distinguished name: CN=Telia Root CA v2, O=Telia Finland Oyj, C=FI See JDK-8317373 (JDK Bug System) .
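To illustrate the first two items in practice (the following lines are editorial sketches, not taken from the release notes): in the [libdefaults] section of krb5.conf , a boolean setting can now be written as dns_lookup_kdc = yes instead of dns_lookup_kdc = true ; and the signature-file limit can still be raised beyond the new default with a command-line system property, for example java -Djdk.jar.maxSignatureFileSize=32000000 -jar application.jar to allow signature-related files of up to 32 MB.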
Revised on 2024-05-10 09:07:40 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.402_release_notes/openjdk-temurin-features-8.0.402_openjdk |
3.3. NFS Share Setup | 3.3. NFS Share Setup The following procedure configures the NFS share for the NFS daemon failover. You need to perform this procedure on only one node in the cluster. Create the /nfsshare directory. Mount the ext4 file system that you created in Section 3.2, "Configuring an LVM Volume with an ext4 File System" on the /nfsshare directory. Create an exports directory tree on the /nfsshare directory. Place files in the exports directory for the NFS clients to access. For this example, we are creating test files named clientdatafile1 and clientdatafile2 . Unmount the ext4 file system and deactivate the LVM volume group. | [
"mkdir /nfsshare",
"mount /dev/my_vg/my_lv /nfsshare",
"mkdir -p /nfsshare/exports mkdir -p /nfsshare/exports/export1 mkdir -p /nfsshare/exports/export2",
"touch /nfsshare/exports/export1/clientdatafile1 touch /nfsshare/exports/export2/clientdatafile2",
"umount /dev/my_vg/my_lv vgchange -an my_vg"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-nfssharesetup-haaa |
Chapter 12. Using images | Chapter 12. Using images 12.1. Using images overview Use the following topics to discover the different Source-to-Image (S2I), database, and other container images that are available for OpenShift Container Platform users. Red Hat official container images are provided in the Red Hat Registry at registry.redhat.io . OpenShift Container Platform's supported S2I, database, and Jenkins images are provided in the openshift4 repository in the Red Hat Quay Registry. For example, quay.io/openshift-release-dev/ocp-v4.0-<address> is the name of the OpenShift Application Platform image. The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry but suffixed with a -openshift . For example, registry.redhat.io/jboss-eap-6/eap64-openshift is the name of the JBoss EAP image. All Red Hat supported images covered in this section are described in the Container images section of the Red Hat Ecosystem Catalog . For every version of each image, you can find details on its contents and usage. Browse or search for the image that interests you. Important The newer versions of container images are not compatible with earlier versions of OpenShift Container Platform. Verify and use the correct version of container images, based on your version of OpenShift Container Platform. 12.2. Source-to-image You can use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. You can use the Red Hat Java Source-to-Image for OpenShift documentation as a reference for runtime environments that use Java. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images include: .NET Java Go Node.js Perl PHP Python Ruby S2I images are available for you to use directly from the OpenShift Container Platform web console by following this procedure: Log in to the OpenShift Container Platform web console using your login credentials. The default view for the OpenShift Container Platform web console is the Administrator perspective. Use the perspective switcher to switch to the Developer perspective. In the +Add view, use the Project drop-down list to select an existing project or create a new project. Click All services in the Developer Catalog tile. Click Builder Images under Type to see the available S2I images. S2I images are also available through the Cluster Samples Operator . 12.2.1. Source-to-image build process overview Source-to-image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. It performs the following steps: Runs the FROM <builder image> command Copies the source code to a defined location in the builder image Runs the assemble script in the builder image Sets the run script in the builder image as the default command Buildah then creates the container image. 12.2.2. Additional resources Configuring the Cluster Samples Operator Using build strategies Troubleshooting the Source-to-Image process Creating images from source code with source-to-image About testing source-to-image images 12.3. Customizing source-to-image images Source-to-image (S2I) builder images include assemble and run scripts, but the default behavior of those scripts is not suitable for all users.
You can customize the behavior of an S2I builder that includes default scripts. 12.3.1. Invoking scripts embedded in an image Builder images provide their own version of the source-to-image (S2I) scripts that cover the most common use-cases. If these scripts do not fulfill your needs, S2I provides a way of overriding them by adding custom ones in the .s2i/bin directory. However, by doing this, you are completely replacing the standard scripts. In some cases, replacing the scripts is acceptable, but, in other scenarios, you can run a few commands before or after the scripts while retaining the logic of the script provided in the image. To reuse the standard scripts, you can create a wrapper script that runs custom logic and delegates further work to the default scripts in the image. Procedure Look at the value of the io.openshift.s2i.scripts-url label to determine the location of the scripts inside of the builder image: USD podman inspect --format='{{ index .Config.Labels "io.openshift.s2i.scripts-url" }}' wildfly/wildfly-centos7 Example output image:///usr/libexec/s2i You inspected the wildfly/wildfly-centos7 builder image and found out that the scripts are in the /usr/libexec/s2i directory. Create a script that includes an invocation of one of the standard scripts wrapped in other commands: .s2i/bin/assemble script #!/bin/bash echo "Before assembling" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo "After successful assembling" else echo "After failed assembling" fi exit USDrc This example shows a custom assemble script that prints the message, runs the standard assemble script from the image, and prints another message depending on the exit code of the assemble script. Important When wrapping the run script, you must use exec for invoking it to ensure signals are handled properly. The use of exec also precludes the ability to run additional commands after invoking the default image run script. .s2i/bin/run script #!/bin/bash echo "Before running application" exec /usr/libexec/s2i/run | [
"podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7",
"image:///usr/libexec/s2i",
"#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc",
"#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run"
]
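As a usage sketch (the image, directory, and tag names are illustrative and not taken from this chapter), a build that picks up the custom wrapper scripts can be run with the s2i CLI against a source directory that contains .s2i/bin : s2i build ./myapp wildfly/wildfly-centos7 myapp-image Because S2I prefers scripts supplied with the application source, the wrapper assemble and run scripts run first and then delegate to the defaults in /usr/libexec/s2i .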
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/images/using-images |
5.5. Stopping firewalld | 5.5. Stopping firewalld To stop firewalld , enter the following command as root : To prevent firewalld from starting automatically at system start, enter the following command as root : To make sure firewalld is not started by accessing the firewalld D-Bus interface and also if other services require firewalld , enter the following command as root : | [
"~]# systemctl stop firewalld",
"~]# systemctl disable firewalld",
"~]# systemctl mask firewalld"
]
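A quick verification (standard systemd tooling, not part of the original procedure): running systemctl status firewalld as root should report the service as inactive (dead) after stopping it, and the Loaded line should report masked after masking it.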
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/bh-stopping_firewalld |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.4/making-open-source-more-inclusive |
33.8. Defining DNS Query Policy | 33.8. Defining DNS Query Policy To resolve host names within the DNS domain, a DNS client issues a query to the DNS name server. For some security contexts or for performance, it might be advisable to restrict what clients can query DNS records in the zone. DNS queries can be configured when the zone is created or when it is modified by using the --allow-query option with the ipa dnszone-mod command to set a list of clients which are allowed to issue queries. For example: The default --allow-query value is any , which allows the zone to be queried by any client. | [
"[user@server ~]USD ipa dnszone-mod --allow-query=192.0.2.0/24;2001:DB8::/32;203.0.113.1 example.com"
]
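Because the query policy can also be applied at zone-creation time, an equivalent example (the zone name is hypothetical) is: [user@server ~]USD ipa dnszone-add newzone.example.com --allow-query=192.0.2.0/24 This restricts queries from the moment the zone exists instead of modifying the zone afterwards with ipa dnszone-mod .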
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/dns-queries |
Chapter 5. Next steps | Chapter 5. Next steps After completing the tutorial, consider the following next steps: Explore the tutorial further. Use the MySQL command line client to add, modify, and remove rows in the database tables, and see the effect on the topics. Keep in mind that you cannot remove a row that is referenced by a foreign key. Plan a Debezium deployment. You can install Debezium in OpenShift or on Red Hat Enterprise Linux. For more information, see the following: Installing Debezium on OpenShift Installing Debezium on RHEL Revised on 2023-11-17 04:10:18 UTC | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/getting_started_with_debezium/next-steps
Chapter 30. Kernel | Chapter 30. Kernel libcgroup no longer truncates the values of cgroup subsystem parameters that are longer than 100 characters Previously, the internal representation of a value of any cgroup subsystem parameter was limited to a maximum length of 100 characters. Consequently, the libcgroup library truncated values longer than 100 characters before writing them to the file representing the matching cgroup subsystem parameter in the kernel. With this update, the maximum length of values of cgroup subsystem parameters in libcgroup has been extended to 4096 characters. As a result, libcgroup now handles values of cgroup subsystem parameters of any length correctly. (BZ#1549175) The mlx5 device no longer contains a firmware issue Previously, the mlx5 device contained a firmware issue that, in certain situations, caused the link of mlx5 devices to drop after rebooting a system. As a consequence, a message similar to the following was seen in the output of the dmesg command: The issue is fixed in the latest firmware of this device. Contact your hardware vendor for information on how to obtain and install the latest firmware for your mlx5 device. (BZ#1636930) | [
"mlx5_core 0000:af:00.0: Port module event[error]: module 0, Cable error, Bus stuck(I2C or data shorted)"
]
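To check whether an mlx5 adapter already runs fixed firmware (the interface name is illustrative), run ethtool -i ens1f0 ; the driver field reports mlx5_core , and the firmware-version field can be compared against the firmware release notes from your hardware vendor.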
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/bug_fixes_kernel |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/proc_providing-feedback-on-red-hat-documentation |
2.4. Battery Life Tool Kit | 2.4. Battery Life Tool Kit Red Hat Enterprise Linux 7 introduces the Battery Life Tool Kit ( BLTK ), a test suite that simulates and analyzes battery life and performance. BLTK achieves this by performing sets of tasks that simulate specific user groups and reporting on the results. Although developed specifically to test notebook performance, BLTK can also report on the performance of desktop computers when started with the -a option. BLTK allows you to generate very reproducible workloads that are comparable to real use of a machine. For example, the office workload writes a text document, makes corrections in it, and does the same for a spreadsheet. Running BLTK combined with PowerTOP or any of the other auditing or analysis tools allows you to test whether the optimizations you performed have an effect when the machine is actively in use instead of only idling. Because you can run the exact same workload multiple times for different settings, you can compare the results for those settings. Install BLTK with the command: Run BLTK with the command: For example, to run the idle workload for 120 seconds: The workloads available by default are: -I , --idle system is idle, to use as a baseline for comparison with other workloads -R , --reader simulates reading documents (by default, with Firefox ) -P , --player simulates watching multimedia files from a CD or DVD drive (by default, with mplayer ) -O , --office simulates editing documents with the OpenOffice.org suite Other options allow you to specify: -a , --ac-ignore ignore whether AC power is available (necessary for desktop use) -T number_of_seconds , --time number_of_seconds the time (in seconds) over which to run the test; use this option with the idle workload -F filename , --file filename specifies a file to be used by a particular workload, for example, a file for the player workload to play instead of accessing the CD or DVD drive -W application , --prog application specifies an application to be used by a particular workload, for example, a browser other than Firefox for the reader workload BLTK supports a large number of more specialized options. For details, see the bltk man page. BLTK saves the results that it generates in a directory specified in the /etc/bltk.conf configuration file - by default, ~/.bltk/workload.results.number/ . For example, the ~/.bltk/reader.results.002/ directory holds the results of the third test with the reader workload (the first test is not numbered). The results are spread across several text files. To condense these results into a format that is easy to read, run: The results now appear in a text file named Report in the results directory. To view the results in a terminal emulator instead, use the -o option: | [
"~]# yum install bltk",
"~]USD bltk workload options",
"~]USD bltk -I -T 120",
"~]USD bltk_report path_to_results_directory",
"~]USD bltk_report -o path_to_results_directory"
]
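Combining the options described above, a typical desktop invocation (shown as an editorial example) is bltk -R -a , which simulates document reading while ignoring AC power; the condensed report for the first such run can then be printed in the terminal with bltk_report -o ~/.bltk/reader.results/ .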
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/bltk |
Install | Install Red Hat Advanced Cluster Management for Kubernetes 2.11 Installation | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/install/index |