title | content | commands | url
---|---|---|---|
Chapter 13. Score calculation performance tricks | Chapter 13. Score calculation performance tricks Most of the execution time of a solver involves running the score calculation, which is called in the solver's deepest loops. Faster score calculation returns the same solution in less time with the same algorithm. This usually provides a better solution in the same amount of time. Use the following techniques to improve your score calculation performance. 13.1. Score calculation speed When you are improving your score calculation, focus on maximizing the score calculation speed instead of maximizing the best score. A big improvement in score calculation can sometimes yield little or no best score improvement, for example when the algorithm is stuck in a local or global optimum. If you watch the calculation speed instead, score calculation improvements are far more visible. The score calculation speed, measured as the number of score calculations per second, is a reliable measurement of score calculation performance, even though it is affected by non-score calculation execution time. The result depends on the scale of the problem data set. Normally, even for large-scale problems, the score calculation speed is higher than 1000 , unless you are using an EasyScoreCalculator class. By watching the calculation speed, you can remove or add score constraints and compare the latest calculation speed with the original calculation speed. Note Comparing the best score with the original best score is pointless. It's like comparing apples and oranges. 13.2. Incremental score calculation Incremental score calculation is also known as delta-based score calculation. When a solution changes, incremental score calculation finds the new score by evaluating only the changes between the previous state and the current state, instead of recalculating the entire score on every solution evaluation. For example, in the N Queens problem, when queen A moves from row 1 to row 2 , the incremental score calculation does not check whether queen B and queen C can attack each other, because neither of them changed position, as shown in the following illustration: The following example shows incremental score calculation for employee rostering: Incremental score calculation provides a significant performance and scalability gain. Constraint streams or Drools score calculation provides this scalability gain without forcing you to write a complicated incremental score calculation algorithm. Just let the rule engine do the hard work. Notice that the gain in calculation speed grows with the size of your planning problem (your n ). This is what makes incremental score calculation scalable. 13.3. Remote services Do not call remote services in your score calculation unless you are bridging the EasyScoreCalculator class to a legacy system. The network latency will significantly degrade your score calculation performance. Cache the results of those remote services if possible. If some parts of a constraint can be calculated once, when the solver starts, and never change during solving, then turn them into cached problem facts. 13.4. Pointless constraints If you know that a specific constraint can never be broken or that it is always broken, do not write a score constraint for it. For example, in the N Queens problem, the score calculation does not check whether multiple queens occupy the same column because a queen's column never changes and every solution starts with each queen on a different column. Note Do not overuse this technique. If some data sets do not use a specific constraint but others do, just return from the constraint as early as possible. There is no need to dynamically change your score calculation based on the data set. 
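The delta-based idea from Section 13.2 can be made concrete with a small standalone sketch. The following Java class is only an illustration of the principle, not the OptaPlanner incremental score calculation API; the Queen class and its getRow() and getColumn() accessors are assumed for the example.

```java
// Illustrative sketch of delta-based scoring for N Queens (not the OptaPlanner API).
import java.util.List;

public class IncrementalQueenScorer {

    private final List<Queen> queens; // assumed domain class with getRow()/getColumn()
    private int conflictCount;        // running total, kept up to date on every move

    public IncrementalQueenScorer(List<Queen> queens) {
        this.queens = queens;
        // Full calculation happens only once, when solving starts.
        for (int i = 0; i < queens.size(); i++) {
            for (int j = i + 1; j < queens.size(); j++) {
                if (attacks(queens.get(i), queens.get(j))) {
                    conflictCount++;
                }
            }
        }
    }

    // Call before a queen changes row: retract the matches that involve it.
    public void beforeRowChanged(Queen moving) {
        conflictCount -= conflictsInvolving(moving);
    }

    // Call after the row changed: insert the new matches. Only O(n) pairs are
    // re-evaluated instead of all O(n^2) pairs, which is the incremental gain.
    public void afterRowChanged(Queen moving) {
        conflictCount += conflictsInvolving(moving);
    }

    public int getScore() {
        return -conflictCount; // 0 means no queen attacks another queen
    }

    private int conflictsInvolving(Queen moving) {
        int count = 0;
        for (Queen other : queens) {
            if (other != moving && attacks(moving, other)) {
                count++;
            }
        }
        return count;
    }

    private boolean attacks(Queen a, Queen b) {
        int columnDistance = Math.abs(a.getColumn() - b.getColumn());
        return a.getRow() == b.getRow()
                || columnDistance == Math.abs(a.getRow() - b.getRow());
    }
}
```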
13.5. Built-in hard constraints Instead of implementing a hard constraint, the hard constraint can sometimes be built in. For example, in the school timetabling example, if Lecture A should never be assigned to Room X, but it uses the ValueRangeProvider class on Solution , the Solver will often try to assign it to Room X only to discover that it breaks a hard constraint. Use a ValueRangeProvider on the planning entity or filtered selection to define that Lecture A should only be assigned a room other than X. This can give a good performance gain in some use cases, not just because the score calculation is faster, but because most optimization algorithms will spend less time evaluating infeasible solutions. However, usually this is not a good idea because there is a real risk of trading short-term benefits for long-term harm: Many optimization algorithms rely on the freedom to break hard constraints when changing planning entities, to get out of local optima. Both implementation approaches have limitations, such as reduced feature compatibility and the loss of automatic performance optimizations. 13.6. Score traps Make sure that none of your score constraints cause a score trap. A score trap occurs when a score constraint applies the same weight to many different constraint matches. Doing this groups those constraint matches together and creates a flatlined score function for that constraint. This can cause a solution state in which several moves must be done before the score of that single constraint improves. The following examples illustrate score traps: You need two doctors at each operating table but you are only moving one doctor at a time. The solver has no incentive to move a doctor to a table with no doctors. To fix this, penalize a table with no doctors more than a table with only one doctor in that score constraint. Two exams must be conducted at the same time, but you are only moving one exam at a time. The solver has to move one of those exams to another time slot without being able to move the other one in the same move. To fix this, add a coarse-grained move that moves both exams at the same time. The following illustration shows a score trap: If the blue item moves from an overloaded computer to an empty computer, the hard score should improve. But the trapped score implementation does not reflect that improvement. The solver should eventually get out of this trap, but it will take a lot of effort, especially if there are even more processes on the overloaded computer. Before it can do that, it might actually start moving more processes into that overloaded computer, because there is no penalty for doing so. Note Avoiding score traps does not mean that your score function should be smart enough to avoid local optima. Leave it to the optimization algorithms to deal with the local optima. Avoiding score traps means avoiding, for each score constraint individually, a flatlined score function. Important Always specify the degree of infeasibility. The business often says "if the solution is infeasible, it does not matter how infeasible it is." While that is true for the business, it is not true for score calculation, because score calculation benefits from knowing how infeasible a solution is. In practice, soft constraints usually do this naturally and it is just a matter of doing it for the hard constraints too. 
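As a sketch of the advice to always specify the degree of infeasibility, the following constraint-stream example penalizes each overloaded computer by the size of its CPU shortage instead of by a flat -1hard, so every move that reduces the overload immediately improves the score. It is a hedged sketch: the CloudComputer class and its getCpuShortage() method (returning how many CPUs are missing, or 0 when the computer is not overloaded) are assumptions for illustration, not part of the documented example.

```java
// Hedged sketch: grade the hard penalty by how overloaded the computer is.
// CloudComputer and getCpuShortage() are assumed domain members.
Constraint cpuCapacity(ConstraintFactory factory) {
    return factory.forEach(CloudComputer.class)
            // Only overloaded computers match this constraint.
            .filter(computer -> computer.getCpuShortage() > 0)
            // Penalize -1hard per missing CPU, so moving any process off an
            // overloaded computer lowers the penalty and avoids a score trap.
            .penalize("CPU capacity",
                    HardSoftScore.ONE_HARD,
                    CloudComputer::getCpuShortage);
}
```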
There are several ways to deal with a score trap: Improve the score constraint to make a distinction in the score weight. For example, penalize -1hard for every missing CPU instead of just -1hard if any CPU is missing. If changing the score constraint is not allowed from the business perspective, add a lower score level with a score constraint that makes such a distinction. For example, penalize -1subsoft for every missing CPU, on top of -1hard if any CPU is missing. The business ignores the subsoft score level. Add coarse-grained moves and union-select them with the existing fine-grained moves. A coarse-grained move effectively does multiple moves to directly get out of a score trap with a single move. For example, move multiple items from the same container to another container. 13.7. The stepLimit benchmark Not all score constraints have the same performance cost. Sometimes one score constraint can kill the score calculation performance outright. Use the benchmarker to perform a one-minute run and then check what happens to the score calculation speed if you comment out all but one of the score constraints. 13.8. Fairness score constraints Some use cases have a business requirement to provide a fair schedule, usually as a soft score constraint, for example: To avoid envy, fairly distribute the workload among the employees. To improve reliability, evenly distribute the workload among assets. Implementing such a constraint might seem difficult, especially because there are different ways to formalize fairness, but usually the squared workload implementation behaves in the most desirable manner. For each employee or asset, specify the workload as w and subtract w² from the score. The squared workload implementation guarantees that if you select two employees from a specified solution and make the distribution of workload between those two employees fairer, then the resulting new solution will have a better overall score. Do not use only the difference from the average workload because that can lead to unfairness, as demonstrated in the following illustration: Note Instead of the squared workload implementation, it is also possible to use the variance (squared difference to the average) or the standard deviation (square root of the variance). This has no effect on the score comparison, because the average does not change during planning. It is just more work to implement, because the average needs to be known, and trivially slower, because the calculation takes a little longer. When the workload is perfectly balanced, users often like to see a 0 score, instead of the distracting -34soft . This is shown in the preceding image for the last solution, which is almost perfectly balanced. To nullify this, either add the average multiplied by the number of entities to the score or show the variance or standard deviation in the UI. 13.9. Other score calculation performance tricks Use the following tips to further improve your score calculation performance: Verify that your score calculation occurs in the correct number type. For example, if you are adding values of the type int , do not store the result as the type double , which takes longer to calculate. For optimal performance, use the latest Java version. For example, you can achieve a ~10% performance increase by switching from Java 11 to Java 17. Always remember that premature optimization is very undesirable. Make sure your design is flexible enough to allow configuration-based adjustments. 
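Returning to the fairness constraint in Section 13.8, a minimal constraint-stream sketch of the squared workload implementation could look as follows. The Employee class and its getWorkload() method are assumptions for illustration; the real implementation depends on your domain model and on how the workload is derived.

```java
// Hedged sketch of the squared workload fairness constraint.
// Employee and getWorkload() are assumed domain members.
Constraint fairWorkloadDistribution(ConstraintFactory factory) {
    return factory.forEach(Employee.class)
            // Subtract workload squared: shifting one assignment from a busier
            // employee (workload w) to a less busy one (workload v, with v + 1 < w)
            // always lowers the total penalty, because (w - 1)^2 + (v + 1)^2 < w^2 + v^2.
            .penalize("Fair workload distribution",
                    HardSoftScore.ONE_SOFT,
                    employee -> employee.getWorkload() * employee.getWorkload());
}
```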
13.10. Configuring constraints Deciding the correct weight and level for each constraint is not easy. It often involves negotiating with different stakeholders and their priorities. Furthermore, quantifying the impact of soft constraints is often a new experience for business managers, so they need a number of iterations to get it right. To make this easier, use a constraint configuration class annotated with @ConstraintConfiguration to hold the constraint weights and parameters. Then, provide a UI so business managers can adjust the constraint weights themselves and visualize the resulting solution, as shown in the following illustration: For example, in the conference scheduling problem, the minimum pause constraint has a constraint weight, but it also has a constraint parameter that defines the length of time between two talks by the same speaker. The pause length depends on the conference: in some large conferences 20 minutes isn't enough time to go from one room to the other and in smaller conferences 10 minutes can be enough time. The pause length is a field in the constraint configuration without a @ConstraintWeight annotation. Each constraint has a constraint package and a constraint name and together they form the constraint ID. The constraint ID connects the constraint weight with the constraint implementation. For each constraint weight, there must be a constraint implementation with the same package and the same name. The @ConstraintConfiguration annotation has a constraintPackage property that defaults to the package of the constraint configuration class. Cases with constraint streams normally do not need to specify it. The @ConstraintWeight annotation has a value which is the constraint name (for example "Speaker conflict"). It inherits the constraint package from the @ConstraintConfiguration , but it can override that, for example @ConstraintWeight(constraintPackage = "... region.france", ... ) to use a different constraint package than some other weights. So every constraint weight ends up with a constraint package and a constraint name. Each constraint weight links with a constraint implementation, for example in constraint streams: public final class ConferenceSchedulingConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory factory) { return new Constraint[] { speakerConflict(factory), themeTrackConflict(factory), contentConflict(factory), ... }; } protected Constraint speakerConflict(ConstraintFactory factory) { return factory.forEachUniquePair(...) ... .penalizeConfigurable("Speaker conflict", ...); } protected Constraint themeTrackConflict(ConstraintFactory factory) { return factory.forEachUniquePair(...) ... .penalizeConfigurable("Theme track conflict", ...); } protected Constraint contentConflict(ConstraintFactory factory) { return factory.forEachUniquePair(...) ... .penalizeConfigurable("Content conflict", ...); } ... } Each constraint weight defines the score level and score weight of its constraint. The constraint implementation calls rewardConfigurable() or penalizeConfigurable() and the constraint weight is automatically applied. If the constraint implementation provides a match weight, that match weight is multiplied with the constraint weight. 
For example, the content conflict constraint weight defaults to 100soft and the constraint implementation penalizes each match based on the number of shared content tags and the overlapping duration of the two talks: @ConstraintWeight("Content conflict") private HardMediumSoftScore contentConflict = HardMediumSoftScore.ofSoft(100); Constraint contentConflict(ConstraintFactory factory) { return factory.forEachUniquePair(Talk.class, overlapping(t -> t.getTimeslot().getStartDateTime(), t -> t.getTimeslot().getEndDateTime()), filtering((talk1, talk2) -> talk1.overlappingContentCount(talk2) > 0)) .penalizeConfigurable("Content conflict", (talk1, talk2) -> talk1.overlappingContentCount(talk2) * talk1.overlappingDurationInMinutes(talk2)); } So when two overlapping talks share only one content tag and overlap by 60 minutes, the match weight is 60 (1 × 60) and the score is impacted by -6000soft (60 × 100soft). But when two overlapping talks share three content tags and overlap by 60 minutes, the match weight is 180 (3 × 60), so the score is impacted by -18000soft . Procedure Create a new class to hold the constraint weights and other constraint parameters, for example ConferenceConstraintConfiguration . Annotate this class with @ConstraintConfiguration : @ConstraintConfiguration public class ConferenceConstraintConfiguration { ... } Add the constraint configuration to the planning solution and annotate that field or property with @ConstraintConfigurationProvider : @PlanningSolution public class ConferenceSolution { @ConstraintConfigurationProvider private ConferenceConstraintConfiguration constraintConfiguration; ... } In the constraint configuration class, add a @ConstraintWeight property for each constraint and give each constraint weight a default value: @ConstraintConfiguration(constraintPackage = "...conferencescheduling.score") public class ConferenceConstraintConfiguration { @ConstraintWeight("Speaker conflict") private HardMediumSoftScore speakerConflict = HardMediumSoftScore.ofHard(10); @ConstraintWeight("Theme track conflict") private HardMediumSoftScore themeTrackConflict = HardMediumSoftScore.ofSoft(10); @ConstraintWeight("Content conflict") private HardMediumSoftScore contentConflict = HardMediumSoftScore.ofSoft(100); ... } The @ConstraintConfigurationProvider annotation automatically exposes the constraint configuration as a problem fact. There is no need to add a @ProblemFactProperty annotation. A constraint weight cannot be null. Expose the constraint weights in a UI so business users can tweak the values. The preceding example uses the ofHard() and ofSoft() methods (ofMedium() is also available) to set those default values. Notice how it defaults the content conflict constraint as ten times more important than the theme track conflict constraint. Normally, a constraint weight only uses one score level, but it's possible to use multiple score levels (at a small performance cost). 13.11. Explaining the score There are several ways to show how the OptaPlanner score is derived. This is called explaining the score: Print the return value of getSummary() . This is the easiest way to explain the score during development, but only use this method for diagnostic purposes. Use the ScoreManager API in an application or web UI. Break down the score for each constraint for a more granular view. 
Procedure Use one of the following methods to explain the score: Print the return value of getSummary() : System.out.println(scoreManager.getSummary(solution)); The following conference scheduling example prints that talk S51 is responsible for breaking the hard constraint Speaker required room tag : Explanation of score (-1hard/-806soft): Constraint match totals: -1hard: constraint (Speaker required room tag) has 1 matches: -1hard: justifications ([S51]) -340soft: constraint (Theme track conflict) has 32 matches: -20soft: justifications ([S68, S66]) -20soft: justifications ([S61, S44]) ... ... Indictments (top 5 of 72): -1hard/-22soft: justification (S51) has 12 matches: -1hard: constraint (Speaker required room tag) -10soft: constraint (Theme track conflict) ... ... Important Do not attempt to parse this string or use it in your UI or exposed services. Instead use the ConstraintMatch API. Use the ScoreManager API in an application or web UI. Enter code similar to the following example: ScoreManager<CloudBalance, HardSoftScore> scoreManager = ScoreManager.create(solverFactory); ScoreExplanation<CloudBalance, HardSoftScore> scoreExplanation = scoreManager.explainScore(cloudBalance); Use this code when you need to calculate the score of a solution: HardSoftScore score = scoreExplanation.getScore(); Break down the score by constraint: Get the ConstraintMatchTotal values from ScoreExplanation : Collection<ConstraintMatchTotal<HardSoftScore>> constraintMatchTotals = scoreExplanation.getConstraintMatchTotalMap().values(); for (ConstraintMatchTotal<HardSoftScore> constraintMatchTotal : constraintMatchTotals) { String constraintName = constraintMatchTotal.getConstraintName(); // The score impact of that constraint HardSoftScore totalScore = constraintMatchTotal.getScore(); for (ConstraintMatch<HardSoftScore> constraintMatch : constraintMatchTotal.getConstraintMatchSet()) { List<Object> justificationList = constraintMatch.getJustificationList(); HardSoftScore score = constraintMatch.getScore(); ... } } Each ConstraintMatchTotal represents one constraint and has a part of the overall score. The sum of all the ConstraintMatchTotal.getScore() equals the overall score. Note Constraint streams and Drools score calculation support constraint matches automatically, but incremental Java score calculation requires implementing an extra interface. 13.12. Visualizing the hot planning entities Show a heat map in the UI that highlights the planning entities and problem facts that have an impact on the score. Procedure Get the Indictment map from the ScoreExplanation : Map<Object, Indictment<HardSoftScore>> indictmentMap = scoreExplanation.getIndictmentMap(); for (CloudProcess process : cloudBalance.getProcessList()) { Indictment<HardSoftScore> indictment = indictmentMap.get(process); if (indictment == null) { continue; } // The score impact of that planning entity HardSoftScore totalScore = indictment.getScore(); for (ConstraintMatch<HardSoftScore> constraintMatch : indictment.getConstraintMatchSet()) { String constraintName = constraintMatch.getConstraintName(); HardSoftScore score = constraintMatch.getScore(); ... } } Each Indictment is the sum of all constraints where that justification object is involved. The sum of all the Indictment.getScoreTotal() differs from the overall score because multiple Indictment entities can share the same ConstraintMatch . 
Note Constraint streams and Drools score calculation support constraint matches automatically, but incremental Java score calculation requires implementing an extra interface. 13.13. Score constraints testing Different score calculation types come with different tools for testing. Write a unit test for each score constraint individually to check that it behaves correctly. | [
"public final class ConferenceSchedulingConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory factory) { return new Constraint[] { speakerConflict(factory), themeTrackConflict(factory), contentConflict(factory), }; } protected Constraint speakerConflict(ConstraintFactory factory) { return factory.forEachUniquePair(...) .penalizeConfigurable(\"Speaker conflict\", ...); } protected Constraint themeTrackConflict(ConstraintFactory factory) { return factory.forEachUniquePair(...) .penalizeConfigurable(\"Theme track conflict\", ...); } protected Constraint contentConflict(ConstraintFactory factory) { return factory.forEachUniquePair(...) .penalizeConfigurable(\"Content conflict\", ...); } }",
"@ConstraintWeight(\"Content conflict\") private HardMediumSoftScore contentConflict = HardMediumSoftScore.ofSoft(100);",
"Constraint contentConflict(ConstraintFactory factory) { return factory.forEachUniquePair(Talk.class, overlapping(t -> t.getTimeslot().getStartDateTime(), t -> t.getTimeslot().getEndDateTime()), filtering((talk1, talk2) -> talk1.overlappingContentCount(talk2) > 0)) .penalizeConfigurable(\"Content conflict\", (talk1, talk2) -> talk1.overlappingContentCount(talk2) * talk1.overlappingDurationInMinutes(talk2)); }",
"@ConstraintConfiguration public class ConferenceConstraintConfiguration { }",
"@PlanningSolution public class ConferenceSolution { @ConstraintConfigurationProvider private ConferenceConstraintConfiguration constraintConfiguration; }",
"@ConstraintConfiguration(constraintPackage = \"...conferencescheduling.score\") public class ConferenceConstraintConfiguration { @ConstraintWeight(\"Speaker conflict\") private HardMediumSoftScore speakerConflict = HardMediumSoftScore.ofHard(10); @ConstraintWeight(\"Theme track conflict\") private HardMediumSoftScore themeTrackConflict = HardMediumSoftScore.ofSoft(10); @ConstraintWeight(\"Content conflict\") private HardMediumSoftScore contentConflict = HardMediumSoftScore.ofSoft(100); }",
"System.out.println(scoreManager.getSummary(solution));",
"Explanation of score (-1hard/-806soft): Constraint match totals: -1hard: constraint (Speaker required room tag) has 1 matches: -1hard: justifications ([S51]) -340soft: constraint (Theme track conflict) has 32 matches: -20soft: justifications ([S68, S66]) -20soft: justifications ([S61, S44]) Indictments (top 5 of 72): -1hard/-22soft: justification (S51) has 12 matches: -1hard: constraint (Speaker required room tag) -10soft: constraint (Theme track conflict)",
"ScoreManager<CloudBalance, HardSoftScore> scoreManager = ScoreManager.create(solverFactory); ScoreExplanation<CloudBalance, HardSoftScore> scoreExplanation = scoreManager.explainScore(cloudBalance);",
"HardSoftScore score = scoreExplanation.getScore();",
"Collection<ConstraintMatchTotal<HardSoftScore>> constraintMatchTotals = scoreExplanation.getConstraintMatchTotalMap().values(); for (ConstraintMatchTotal<HardSoftScore> constraintMatchTotal : constraintMatchTotals) { String constraintName = constraintMatchTotal.getConstraintName(); // The score impact of that constraint HardSoftScore totalScore = constraintMatchTotal.getScore(); for (ConstraintMatch<HardSoftScore> constraintMatch : constraintMatchTotal.getConstraintMatchSet()) { List<Object> justificationList = constraintMatch.getJustificationList(); HardSoftScore score = constraintMatch.getScore(); } }",
"Map<Object, Indictment<HardSoftScore>> indictmentMap = scoreExplanation.getIndictmentMap(); for (CloudProcess process : cloudBalance.getProcessList()) { Indictment<HardSoftScore> indictment = indictmentMap.get(process); if (indictment == null) { continue; } // The score impact of that planning entity HardSoftScore totalScore = indictment.getScore(); for (ConstraintMatch<HardSoftScore> constraintMatch : indictment.getConstraintMatchSet()) { String constraintName = constraintMatch.getConstraintName(); HardSoftScore score = constraintMatch.getScore(); } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/score-calculation-performance-tricks-con_score-calculation |
Release notes for Red Hat build of OpenJDK 17.0.12 | Release notes for Red Hat build of OpenJDK 17.0.12 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.12/index |
Chapter 2. Configuring OpenStack's Keystone for the Ceph Object Gateway | Chapter 2. Configuring OpenStack's Keystone for the Ceph Object Gateway As a storage administrator, you can use OpenStack's Keystone authentication service to authenticate users through the Ceph Object Gateway. Before you can configure the Ceph Object Gateway, you must configure Keystone to enable the Swift service and point it to the Ceph Object Gateway. 2.1. Prerequisites A running Red Hat OpenStack Platform 13, 15, or 16 environment. A running Red Hat Ceph Storage environment. A running Ceph Object Gateway environment. 2.2. Creating the Swift service Before configuring the Ceph Object Gateway, configure Keystone so that the Swift service is enabled and pointing to the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Root-level access to the OpenStack controller node. Procedure Create the Swift service: Creating the service will echo the service settings. Table 2.1. Example Field Value description Swift Service enabled True id 37c4c0e79571404cb4644201a4a6e5ee name swift type object-store 2.3. Setting the Ceph Object Gateway endpoints After creating the Swift service, point the service to a Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. A running Swift service on a Red Hat OpenStack Platform 13, 15, or 16 environment. Procedure Create the OpenStack endpoints pointing to the Ceph Object Gateway: Syntax Replace REGION_NAME with the gateway's zone group name or region name. Replace URL with URLs appropriate for the Ceph Object Gateway. Example Field Value adminurl http://radosgw.example.com:8080/swift/v1 id e4249d2b60e44743a67b5e5b38c18dd3 internalurl http://radosgw.example.com:8080/swift/v1 publicurl http://radosgw.example.com:8080/swift/v1 region us-west service_id 37c4c0e79571404cb4644201a4a6e5ee service_name swift service_type object-store Setting the endpoints will output the service endpoint settings. 2.4. Verifying OpenStack is using the Ceph Object Gateway endpoints After creating the Swift service and setting the endpoints, show the endpoints to ensure that all settings are correct. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Procedure Verify the endpoint settings: Showing the endpoints will echo the endpoint settings and the service settings. Table 2.2. Example Field Value adminurl http://radosgw.example.com:8080/swift/v1 enabled True id e4249d2b60e44743a67b5e5b38c18dd3 internalurl http://radosgw.example.com:8080/swift/v1 publicurl http://radosgw.example.com:8080/swift/v1 region us-west service_id 37c4c0e79571404cb4644201a4a6e5ee service_name swift service_type object-store | [
"openstack service create --name=swift --description=\"Swift Service\" object-store",
"openstack endpoint create --region REGION_NAME swift admin \" URL \" openstack endpoint create --region REGION_NAME swift public \" URL \" openstack endpoint create --region REGION_NAME swift internal \" URL \"",
"openstack endpoint create --region us-west swift admin \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift public \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift internal \"http://radosgw.example.com:8080/swift/v1\"",
"openstack endpoint show object-store"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/using_keystone_with_the_ceph_object_gateway_guide/configuring-openstack-keystone-for-the-ceph-object-gateway |
Chapter 2. Software | Chapter 2. Software The Red Hat OpenStack Platform IaaS cloud works as a collection of interacting services that control compute, storage, and networking resources. You can manage the cloud with a web-based dashboard or command-line clients to control, provision, and automate OpenStack resources. OpenStack also has an extensive API that is available to all cloud users. The following diagram provides a high-level overview of the OpenStack core services and their relationship with each other. The following table describes each component in the diagram and provides links for the component documentation section. Table 2.1. Core services Service Code Description Dashboard horizon Web browser-based dashboard that you use to manage OpenStack services. Identity keystone Centralized service for authentication and authorization of OpenStack services, and for managing users, projects, and roles. OpenStack Networking neutron Provides connectivity between the interfaces of OpenStack services. Load balancing service octavia Provides load-balancing services for the cloud. Block Storage cinder Manages persistent block storage volumes for virtual machines. Compute nova Manages and provisions virtual machines running on hypervisor nodes. Image glance Registry service to store resources such as virtual machine images and volume snapshots. Object Storage swift Stores and retrieves files and arbitrary data. Telemetry ceilometer Provides measurements of cloud resources. Orchestration heat Template-based orchestration engine that supports automatic creation of resource stacks. Each OpenStack service contains a functional group of Linux services and other components. For example, the glance-api and glance-registry Linux services, with a MariaDB database, implement the Image service. 2.1. Components This section describes each of the OpenStack components in some detail: OpenStack Dashboard (horizon) OpenStack Dashboard is a graphical user interface that you can use to create and launch instances, manage networking, and set access control. The Dashboard service includes the Project, Admin, and Settings default dashboards. It has a modular design to interface with other products such as billing, monitoring, and additional management tools. OpenStack Identity (keystone) OpenStack Identity provides user authentication and authorization to all OpenStack components. Identity supports multiple authentication mechanisms, including username and password credentials, token-based systems, and AWS-style log-ins. OpenStack Networking (neutron) OpenStack Networking handles creation and management of a virtual networking infrastructure in the OpenStack cloud. Infrastructure elements include networks, subnets, and routers. Load-balancing service (octavia) The OpenStack Load-balancing service (octavia) provides a Load Balancing-as-a-Service (LBaaS) implementation for Red Hat OpenStack Platform director installations. To achieve load balancing, octavia supports enabling multiple provider drivers. The reference provider driver (Amphora provider driver) is an open-source, scalable, and highly available load balancing provider. It accomplishes its delivery of load balancing services by managing a fleet of virtual machines - collectively known as amphorae - which it spins up on demand. OpenStack Block Storage (cinder) OpenStack Block Storage provides persistent block storage management for virtual hard drives. 
You can use Block Storage to create and delete block devices, and to manage attachment of block devices to servers. OpenStack Compute (nova) OpenStack Compute serves as the core of the OpenStack cloud by providing virtual machines on demand. Compute schedules virtual machines to run on a set of nodes by defining drivers that interact with underlying virtualization mechanisms, and by exposing the functionality to the other OpenStack components. OpenStack Image Service (glance) OpenStack Image acts as a registry for virtual disk images. Users can add new images or take a snapshot of an existing server for immediate storage. You can use the snapshots for backup or as templates for new servers. OpenStack Object Storage (swift) Object Storage provides an HTTP-accessible storage system for large amounts of data, including static entities such as videos, images, email messages, files, or VM images. Objects are stored as binaries on the underlying file system along with metadata stored in the extended attributes of each file. OpenStack Telemetry (ceilometer) OpenStack Telemetry provides user-level usage data for OpenStack-based clouds. The data can be used for customer billing, system monitoring, or alerts. Telemetry can collect data from notifications sent by existing OpenStack components such as Compute usage events, or by polling OpenStack infrastructure resources such as libvirt. OpenStack Orchestration (heat) OpenStack Orchestration provides templates to create and manage cloud resources such as storage, networking, instances, or applications. Use templates to create stacks, which are collections of resources. OpenStack Bare Metal Provisioning (ironic) Use OpenStack Bare Metal Provisioning to provision physical or bare metal machines for a variety of hardware vendors with hardware-specific drivers. Bare Metal Provisioning integrates with the Compute service to provision the bare metal machines in the same way that virtual machines are provisioned, and provides a solution for the bare-metal-to-trusted-project use case. OpenStack Shared-Filesystems-as-a-Service (manila) OpenStack Shared File Systems service provides shared file systems that Compute instances can use. The basic resources of the Shared File Systems are shares, snapshots, and share networks. OpenStack Key Manager Service (barbican) OpenStack Key Manager Service is a REST API designed for the secure storage, provisioning and management of secrets such as passwords, encryption keys, and X.509 Certificates. This includes keying material such as Symmetric Keys, Asymmetric Keys, Certificates, and raw binary data. Red Hat OpenStack Platform director The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of OpenStack components to install a fully-operational OpenStack environment. It includes new OpenStack components that provision and control bare metal systems to use as OpenStack nodes. It provides a simple method for installing a complete Red Hat OpenStack Platform environment. The Red Hat OpenStack Platform director uses two main concepts: an undercloud and an overcloud. The undercloud installs and configures the overcloud. 
OpenStack High Availability To keep your OpenStack environment up and running efficiently, you can use the director to create configurations that offer high availability and load balancing across all major services in Red Hat OpenStack Platform. OpenStack Operational Tools Red Hat OpenStack Platform comes with an optional suite of tools, such as Centralized Logging, Availability Monitoring, and Performance Monitoring. You can use these tools to maintain your OpenStack environment. 2.2. Integration You can integrate Red Hat OpenStack Platform with the following third-party software: Tested and Approved Software 2.3. Installation summary Red Hat supports the installation of Red Hat OpenStack Platform using the following methods: Red Hat OpenStack Platform director : Recommended for enterprise deployments. For more information, see Red Hat OpenStack Platform Director Installation and Usage . packstack : packstack is a deployment that consists of a public network and a private network on one machine, hosting one CirrOS-image instance, with an attached storage volume. Installed OpenStack services include: Block Storage, Compute, Dashboard, Identity, Image, OpenStack Networking, Object Storage, and Telemetry. Packstack is a command-line utility that rapidly deploys Red Hat OpenStack Platform. Note Packstack deployments are intended only for POC-type testing environments and are not suitable for production. By default, the public network is only routable from the OpenStack host. For more information, see Evaluating OpenStack: Single-Node Deployment . For a comparison of these installation options, see Installing and Managing Red Hat OpenStack Platform . 2.4. Subscriptions To install Red Hat OpenStack Platform, you must register all systems in the OpenStack environment with Red Hat Subscription Manager, and subscribe to the required channels. The guides listed below detail the channels and repositories you must subscribe to before deploying Red Hat OpenStack Platform. Requirements for installing using director in the Director Installation and Usage guide. Requirements for installing a single-node POC deployment | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/product_guide/ch-rhosp-software |
2.2. Command Line Configuration | 2.2. Command Line Configuration If your system does not have the Date/Time Properties tool installed, or the X Window Server is not running, you have to change the system date and time on the command line. Note that in order to perform the actions described in this section, you have to be logged in as a superuser: 2.2.1. Date and Time Setup The date command allows the superuser to set the system date and time manually: Change the current date. Type the command in the following form at a shell prompt, replacing YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month: For example, to set the date to 2 June 2010, type: Change the current time. Use the following command, where HH stands for an hour, MM is a minute, and SS is a second, all typed in two-digit form: If your system clock is set to use UTC (Coordinated Universal Time), add the following option: For instance, to set the system clock to 11:26 PM using UTC , type: You can check your current settings by typing date without any additional argument: Example 2.1. Displaying the current date and time | [
"~]USD su - Password:",
"~]# date +%D -s YYYY-MM-DD",
"~]# date +%D -s 2010-06-02",
"~]# date +%T -s HH:MM:SS",
"~]# date +%T -s HH:MM:SS -u",
"~]# date +%T -s 23:26:00 -u",
"~]USD date Wed Jun 2 11:58:48 CEST 2010"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-Date_and_Time_Configuration-Command_Line_Configuration |
Chapter 9. Reinstalling GRUB | Chapter 9. Reinstalling GRUB You can reinstall the GRUB boot loader to fix certain problems, usually caused by an incorrect installation of GRUB, missing files, or a broken system. You can resolve these issues by restoring the missing files and updating the boot information. Reasons to reinstall GRUB: Upgrading the GRUB boot loader packages. Adding the boot information to another drive. The user requires the GRUB boot loader to control installed operating systems. However, some operating systems are installed with their own boot loaders and reinstalling GRUB returns control to the desired operating system. Note GRUB restores files only if they are not corrupted. 9.1. Reinstalling GRUB on BIOS-based machines You can reinstall the GRUB boot loader on your BIOS-based system. Always reinstall GRUB after updating the GRUB packages. Important This overwrites the existing GRUB to install the new GRUB. Ensure that the system does not cause data corruption or boot crash during the installation. Procedure Reinstall GRUB on the device where it is installed. For example, if sda is your device: Reboot your system for the changes to take effect: Additional resources grub-install(1) man page on your system 9.2. Reinstalling GRUB on UEFI-based machines You can reinstall the GRUB boot loader on your UEFI-based system. Important Ensure that the system does not cause data corruption or boot crash during the installation. Procedure Reinstall the grub2-efi and shim boot loader files: Reboot your system for the changes to take effect: 9.3. Reinstalling GRUB on IBM Power machines You can reinstall the GRUB boot loader on the Power PC Reference Platform (PReP) boot partition of your IBM Power system. Always reinstall GRUB after updating the GRUB packages. Important This overwrites the existing GRUB to install the new GRUB. Ensure that the system does not cause data corruption or boot crash during the installation. Procedure Determine the disk partition that stores GRUB: Reinstall GRUB on the disk partition: Replace partition with the identified GRUB partition, such as /dev/sda1 . Reboot your system for the changes to take effect: Additional resources grub-install(1) man page on your system 9.4. Resetting GRUB Resetting GRUB completely removes all GRUB configuration files and system settings, and reinstalls the boot loader. You can reset all the configuration settings to their default values, and therefore fix failures caused by corrupted files and invalid configuration. Important The following procedure removes all the customizations made by the user. Procedure Remove the configuration files: Reinstall packages. On BIOS-based machines: On UEFI-based machines: Rebuild the grub.cfg file for the changes to take effect. On BIOS-based machines: On UEFI-based machines: Follow the Reinstalling GRUB procedure to restore GRUB on the /boot/ partition. | [
"grub2-install /dev/sda",
"reboot",
"yum reinstall grub2-efi shim",
"reboot",
"bootlist -m normal -o sda1",
"grub2-install partition",
"reboot",
"rm /etc/grub.d/ * rm /etc/sysconfig/grub",
"yum reinstall grub2-tools",
"yum reinstall grub2-efi shim grub2-tools",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/assembly_reinstalling-grub_managing-monitoring-and-updating-the-kernel |
Chapter 3. Creating applications | Chapter 3. Creating applications 3.1. Creating applications using the Developer perspective The Developer perspective in the web console provides the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform: Getting started resources : Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. Build with guided documentation : Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies. Explore new developer features : Explore the new features and resources within the Developer perspective. Developer catalog : Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project. All Services : Browse the catalog to discover services across OpenShift Container Platform. Database : Select the required database service and add it to your application. Operator Backed : Select and deploy the required Operator-managed service. Helm chart : Select the required Helm chart to simplify deployment of applications and services. Devfile : Select a devfile from the Devfile registry to declaratively define a development environment. Event Source : Select an event source to register interest in a class of events from a particular system. Note The Managed services option is also available if the RHOAS Operator is installed. Git repository : Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git , From Devfile , or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform. Container images : Use existing images from an image stream or registry to deploy them on OpenShift Container Platform. Pipelines : Use Tekton pipelines to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform. Serverless : Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform. Channel : Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations. Samples : Explore the available sample applications to create, build, and deploy an application quickly. Quick Starts : Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks. From Local Machine : Explore the From Local Machine tile to import or upload files from your local machine for building and deploying applications easily. Import YAML : Upload a YAML file to create and define resources for building and deploying applications. Upload JAR file : Upload a JAR file to build and deploy Java applications. Note that certain options, such as Pipelines , Event Source , and Import Virtual Machines , are displayed only when the OpenShift Pipelines Operator , OpenShift Serverless Operator , and OpenShift Virtualization Operator are installed, respectively. 3.1.1. Prerequisites To create applications using the Developer perspective, ensure that: You have logged in to the web console . 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. To create serverless applications, in addition to the preceding prerequisites, ensure that: You have installed the OpenShift Serverless Operator . You have created a KnativeServing resource in the knative-serving namespace . 3.1.2. Creating Sample applications You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click on the Samples tile to see the Samples page. On the Samples page, select one of the available sample applications to see the Create Sample Application form. In the Create Sample Application Form : In the Name field, the deployment name is displayed by default. You can modify this name as required. In the Builder Image Version , a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list. A sample Git repository URL is added by default. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application. 3.1.3. Creating applications using Quick Starts The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the View all quick starts link to view the Quick Starts page. In the Quick Starts page, click the tile for the quick start that you want to use. Click Start to begin the quick start. 3.1.4. Importing a codebase from Git to create an application You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application. Procedure In the +Add view, click From Git in the Git Repository tile to see the Import from git form. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated. Optional: You can click Show Advanced Git Options to add details such as: Git Reference to point to code in a specific branch, tag, or commit to be used to build the application. Context Dir to specify the subdirectory for the application source code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. Optional: You can import a devfile, a Dockerfile, or a builder image through your Git repository to further customize your deployment. If your Git repository contains a devfile, a Dockerfile, or a builder image, it is automatically detected and populated on the respective path fields. If a devfile, a Dockerfile, and a builder image are detected in the same repository, the devfile is selected by default. 
To edit the file import type and select a different strategy, click Edit import strategy option. If multiple devfiles, Dockerfiles, or builder images are detected, to import a specific devfile, Dockerfile, or a builder image, specify the respective paths relative to the context directory. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected. Optional: Use the Builder Image Version drop-down to specify a version. Optional: Use the Edit import strategy to select a different strategy. Optional: For the Node.js builder image, use the Run command field to override the command to run the application. In the General section: In the Application field, enter a unique name for the application grouping, for example, myapp . Ensure that the application name is unique in a namespace. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned. Note The resource name must be unique in a namespace. Modify the resource name if you get an error. In the Resources section, select: Deployment , to create an application in plain Kubernetes style. Deployment Config , to create an OpenShift Container Platform style application. Serverless Deployment , to create a Knative service. Note The Serverless Deployment option is displayed in the Import from git form only if the OpenShift Serverless Operator is installed in your cluster. For further details, refer to the OpenShift Serverless documentation. In the Pipelines section, select Add Pipeline , and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application. Optional: In the Advanced Options section, the Target port and the Create a route to the application is selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose. Optional: You can use the following advanced options to further customize your application: Routing By clicking the Routing link, you can perform the following actions: Customize the hostname for the route. Specify the path the router watches. Select the target port for the traffic from the drop-down list. Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists. Note For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used. Domain mapping If you are creating a Serverless Deployment , you can add a custom domain mapping to the Knative service during creation. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu. 
If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Health Checks Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required. To customize the health probes: Click Add Readiness Probe , if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe. Click Add Liveness Probe , if required, modify the parameters to check if a container is still running, and select the check mark to add the probe. Click Add Startup Probe , if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe. For each of the probes, you can specify the request type - HTTP GET , Container Command , or TCP Socket , from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value. Build Configuration and Deployment Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource. Scaling Click the Scaling link to define the number of pods or instances of the application you want to deploy initially. If you are creating a serverless deployment, you can also configure the following settings: Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting. Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting. Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time. Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time. Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic. Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s . This is also known as the stable window. Resource Limit Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Labels Click the Labels link to add custom labels to your application. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view. 3.1.5. 
Deploying a Java application by uploading a JAR file You can use the web console Developer perspective to upload a JAR file by using the following options: Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application. Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application. Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application. Prerequisites The Cluster Samples Operator must be installed by a cluster administrator. You have access to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the Topology view, right-click anywhere to view the Add to Project menu. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling. In the Name field, enter a unique component name to name the associated resources. In the Resources field, choose the resource type for your application. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs. Note If you attempt to close the browser tab while the build is running, a web alert is displayed. After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view. 3.1.6. Using the Devfile registry to access devfiles You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry . A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry , you can use a pre-configured devfile to create an application. Procedure Navigate to Developer Perspective +Add Developer Catalog All Services . A list of all the available services in the Developer Catalog is displayed. Under All Services , select Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile. 
Click Create to create an application and view the application in the Topology view. 3.1.7. Using the Developer Catalog to add services or components to your application You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog. Procedure In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog . Under All Services , select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view. Figure 3.1. MariaDB in Topology 3.1.8. Additional resources For more information about Knative routing settings for OpenShift Serverless, see Routing . For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service . For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling . For more information about adding a new user to a project, see Working with projects . For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.2. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. Additional resources See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform. 3.2.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.10 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. 
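For reference, the minimal starting template mentioned in the next step is an EtcdCluster custom resource. The following sketch shows roughly what such an object can look like; the apiVersion, cluster size, and etcd version shown here are assumptions that depend on the installed Operator version, so treat them as illustrative rather than authoritative.
apiVersion: etcd.database.coreos.com/v1beta2   # API group served by the etcd Operator (may differ by version)
kind: EtcdCluster
metadata:
  name: example          # matches the example cluster name used later in this procedure
  namespace: my-etcd     # the project created for this procedure
spec:
  size: 3                # number of etcd members the Operator keeps running
  version: "3.2.13"      # etcd version to deploy; adjust to what your Operator supports
Editing the size field in this template is the kind of modification the next step refers to before you click Create.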
Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The screen allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click on the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.3. Creating applications using the CLI You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI. The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.3.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.3.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.3.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. 
To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.3.1.3. Build strategy detection OpenShift Container Platform automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.3.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. Languages detected by new-app Language Files dotnet project.json , *.csproj jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby imagestream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available. 
The -i <image> --code <repository> invocation requires new-app clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.3.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server. The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes. 3.3.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.3.2.2. Image in a private registry Create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.3.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.3.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.3.3.1. Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.3.4. Modifying application creation The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. 
new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.3.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.3.4.2. Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 \ --build-env HTTP_PROXY=http://myproxy.net:1337/ \ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.3.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.3.4.4. 
Viewing the output without creation To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world \ -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.3.4.5. Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.3.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.3.4.7. Creating multiple objects The new-app command allows creating multiple applications specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 3.3.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.3.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php | [
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/creating-applications |
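As a closing illustration of the oc new-app options described in the preceding section, the flags can be combined in a single invocation. The repository, label, and environment variable below are placeholder values chosen only for this sketch; every flag shown, however, is covered earlier in the section: a custom object name, a target project, a label, a runtime environment variable, and a dry-run that writes the generated objects to a file for review before creation.
# --name sets the object names, -n targets the project, -l adds a label,
# -e passes a runtime environment variable, and -o yaml performs a dry run
# that writes the generated objects to a file instead of creating them.
$ oc new-app https://github.com/openshift/ruby-hello-world \
    --name=myapp \
    -n myproject \
    -l app=myapp \
    -e RACK_ENV=production \
    -o yaml > myapp.yaml
$ vi myapp.yaml                # review or adjust the generated objects
$ oc create -f myapp.yaml      # nothing is created until this command runs
Because -o yaml only prints the objects, the application is not deployed until the final oc create command runs.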
Chapter 35. Defining enumerations for drop-down lists in rule assets | Chapter 35. Defining enumerations for drop-down lists in rule assets Enumeration definitions in Business Central determine the possible values of fields for conditions or actions in guided rules, guided rule templates, and guided decision tables. An enumeration definition contains a fact.field mapping to a list of supported values that are displayed as a drop-down list in the relevant field of a rule asset. When a user selects a field that is based on the same fact and field as the enumeration definition, the drop-down list of defined values is displayed. You can define enumerations in Business Central or in the DRL source for your Red Hat Process Automation Manager project. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Enumeration . Enter an informative Enumeration name and select the appropriate Package . The package that you specify must be the same package where the required data objects and relevant rule assets have been assigned or will be assigned. Click Ok to create the enumeration. The new enumeration is now listed in the Enumeration Definitions panel of the Project Explorer . In the Model tab of the enumerations designer, click Add enum and define the following values for the enumeration: Fact : Specify an existing data object within the same package of your project with which you want to associate this enumeration. Open the Data Objects panel in the Project Explorer to view the available data objects, or create the relevant data object as a new asset if needed. Field : Specify an existing field identifier that you defined as part of the data object that you selected for the Fact . Open the Data Objects panel in the Project Explorer to select the relevant data object and view the list of available Identifier options. You can create the relevant identifier for the data object if needed. Context : Specify a list of values in the format ['string1','string2','string3'] or [integer1,integer2,integer3] that you want to map to the Fact and Field definitions. These values will be displayed as a drop-down list for the relevant field of the rule asset. For example, the following enumeration defines the drop-down values for applicant credit rating in a loan application decision service: Figure 35.1. Example enumeration for applicant credit rating in Business Central Example enumeration for applicant credit rating in the DRL source In this example, for any guided rule, guided rule template, or guided decision table that is in the same package of the project and that uses the Applicant data object and the creditRating field, the configured values are available as drop-down options: Figure 35.2. Example enumeration drop-down options in a guided rule or guided rule template Figure 35.3. Example enumeration drop-down options in a guided decision table 35.1. Advanced enumeration options for rule assets For advanced use cases with enumeration definitions in your Red Hat Process Automation Manager project, consider the following extended options for defining enumerations: Mapping between DRL values and values in Business Central If you want the enumeration values to appear differently or more completely in the Business Central interface than they appear in the DRL source, use a mapping in the format 'fact.field' : ['sourceValue1=UIValue1','sourceValue2=UIValue2', ... ] for your enumeration definition values. 
For example, in the following enumeration definition for loan status, the options A or D are used in the DRL file but the options Approved or Declined are displayed in Business Central: Enumeration value dependencies If you want the selected value in one drop-down list to determine the available options in a subsequent drop-down list, use the format 'fact.fieldB[fieldA=value1]' : ['value2', 'value3', ... ] for your enumeration definition. For example, in the following enumeration definition for insurance policies, the policyType field accepts the values Home or Car . The type of policy that the user selects determines the policy coverage field options that are then available: Note Enumeration dependencies are not applied across rule conditions and actions. For example, in this insurance policy use case, the selected policy in the rule condition does not determine the available coverage options in the rule actions, if applicable. External data sources in enumerations If you want to retrieve a list of enumeration values from an external data source instead of defining the values directly in the enumeration definition, on the class path of your project, add a helper class that returns a java.util.List list of strings. In the enumeration definition, instead of specifying a list of values, identify the helper class that you configured to retrieve the values externally. For example, in the following enumeration definition for loan applicant region, instead of defining applicant regions explicitly in the format 'Applicant.region' : ['country1', 'country2', ... ] , the enumeration uses a helper class that returns the list of values defined externally: In this example, a DataHelper class contains a getListOfRegions() method that returns a list of strings. The enumerations are loaded in the drop-down list for the relevant field in the rule asset. You can also load dependent enumeration definitions dynamically from a helper class by identifying the dependent field as usual and enclosing the call to the helper class within quotation marks: If you want to load all enumeration data entirely from an external data source, such as a relational database, you can implement a Java class that returns a Map<String, List<String>> map. The key of the map is the fact.field mapping and the value is a java.util.List<String> list of values. For example, the following Java class defines loan applicant regions for the related enumeration: public class SampleDataSource { public Map<String, List<String>> loadData() { Map data = new HashMap(); List d = new ArrayList(); d.add("AU"); d.add("DE"); d.add("ES"); d.add("UK"); d.add("US"); ... data.put("Applicant.region", d); return data; } } The following enumeration definition correlates to this example Java class. The enumeration contains no references to fact or field names because they are defined in the Java class: The = operator enables Business Central to load all enumeration data from the helper class. The helper methods are statically evaluated when the enumeration definition is requested for use in an editor. Note Defining an enumeration without a fact and field definition is currently not supported in Business Central. To define the enumeration for the associated Java class in this way, use the DRL source in your Red Hat Process Automation Manager project. | [
"'Applicant.creditRating' : ['AA', 'OK', 'Sub prime']",
"'Loan.status' : ['A=Approved','D=Declined']",
"'Insurance.policyType' : ['Home', 'Car'] 'Insurance.coverage[policyType=Home]' : ['property', 'liability'] 'Insurance.coverage[policyType=Car]' : ['collision', 'fullCoverage']",
"'Applicant.region' : (new com.mycompany.DataHelper()).getListOfRegions()",
"'Applicant.region[countryCode]' : '(new com.mycompany.DataHelper()).getListOfRegions(\"@{countryCode}\")'",
"public class SampleDataSource { public Map<String, List<String>> loadData() { Map data = new HashMap(); List d = new ArrayList(); d.add(\"AU\"); d.add(\"DE\"); d.add(\"ES\"); d.add(\"UK\"); d.add(\"US\"); data.put(\"Applicant.region\", d); return data; } }",
"=(new SampleDataSource()).loadData()"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/enumerations-define-proc_guided-decision-tables |
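The advanced enumeration options in the preceding section reference a com.mycompany.DataHelper class that is not shown. The following minimal Java sketch is written only against the contract described there — it is not part of Business Central, and the region values and the filtering logic for the dependent case are placeholder assumptions — so adapt it to your own data source.
package com.mycompany;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DataHelper {

    // Backs the enumeration 'Applicant.region' : (new com.mycompany.DataHelper()).getListOfRegions()
    public List<String> getListOfRegions() {
        // Placeholder values; a real helper would typically query a database or remote service.
        return new ArrayList<>(Arrays.asList("AU", "DE", "ES", "UK", "US"));
    }

    // Backs the dependent enumeration that passes "@{countryCode}" at run time.
    public List<String> getListOfRegions(String countryCode) {
        // Placeholder filter: return a subset for a hypothetical grouping code, otherwise the full list.
        if ("EU".equalsIgnoreCase(countryCode)) {
            return new ArrayList<>(Arrays.asList("DE", "ES", "UK"));
        }
        return getListOfRegions();
    }
}
Because the helper methods are statically evaluated when the enumeration definition is requested, keep them fast and free of side effects.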
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_advanced_cryostat_configurations/making-open-source-more-inclusive |
Machine APIs | Machine APIs OpenShift Container Platform 4.15 Reference guide for machine APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_apis/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/feedback_configuring-cost-optimized-sap-v9 |
Chapter 4. Changing basic environment settings | Chapter 4. Changing basic environment settings Configuration of basic environment settings is a part of the installation process. The following sections guide you when you change them later. The basic configuration of the environment includes: Date and time System locales Keyboard layout Language 4.1. Configuring the date and time Accurate timekeeping is important for several reasons. In Red Hat Enterprise Linux, timekeeping is ensured by the NTP protocol, which is implemented by a daemon running in user space. The user-space daemon updates the system clock running in the kernel. The system clock can keep time by using various clock sources. Red Hat Enterprise Linux 9 and later versions use the chronyd daemon to implement NTP . chronyd is available from the chrony package. For more information, see Using the chrony suite to configure NTP . 4.1.1. Manually configuring the date, time, and timezone settings To display the current date and time, use either of these steps. Procedure Optional: List the timezones: Set the time zone: Set the date and time: Verification Display the date, time, and timezone: To see more details, use the timedatectl command: Additional resources date(1) and timedatectl(1) man pages 4.2. Configuring time settings by using the web console You can set a time zone and synchronize the system time with a Network Time Protocol (NTP) server in the RHEL web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click the current system time in Overview . Click System time . In the Change System Time dialog box, change the time zone if necessary. In the Set Time drop-down menu, select one of the following: Manually Use this option if you need to set the time manually, without an NTP server. Automatically using NTP server This is a default option, which synchronizes time automatically with the preset NTP servers. Automatically using specific NTP servers Use this option only if you need to synchronize the system with a specific NTP server. Specify the DNS name or the IP address of the server. Click Change . Verification Check the system time displayed in the System tab. Additional resources Using the Chrony suite to configure NTP 4.3. Configuring the system locale System-wide locale settings are stored in the /etc/locale.conf file that is read at early boot by the systemd daemon. Every service or user inherits the locale settings configured in /etc/locale.conf , unless individual programs or individual users override them. Procedure Optional: Display the current system locale settings: List available system locale settings: Update the system locale setting: For example: Note The GNOME Terminal does not support non-UTF8 system locales. For more information, see the Red Hat Knowledgebase solution The gnome-terminal application fails to start when the system locale is set to non-UTF8 . Additional resources man localectl(1) , man locale(7) , and man locale.conf(5) 4.4. Configuring the keyboard layout The keyboard layout settings control the layout used on the text console and graphical user interfaces. Procedure To list available keymaps: To display the current status of keymap settings: To set or change the default system keymap.
For example: Additional resources man localectl(1) , man locale(7) , and man locale.conf(5) man pages 4.5. Changing the font size in text console mode You can change the font size in the virtual console. Procedure Display the currently-used font file: List the available font files: Select a font file that supports your character set and code page. Optional: To test a font file, load it temporarily: The setfont utility applies the font file immediately and terminals use the new font and font size until you reboot or apply a different font file. To return to the font file defined in /etc/vconsole.conf , enter setfont without any parameters. Edit the /etc/vconsole.conf file and set the FONT variable to the font file that RHEL should load at boot time, for example: Reboot the host.
"timedatectl list-timezones Europe/Berlin",
"timedatectl set-timezone <time_zone>",
"timedatectl set-time <YYYY-mm-dd HH:MM-SS>",
"date Mon Mar 30 16:02:59 CEST 2020",
"timedatectl Local time: Mon 2020-03-30 16:04:42 CEST Universal time: Mon 2020-03-30 14:04:42 UTC RTC time: Mon 2020-03-30 14:04:41 Time zone: Europe/Prague (CEST, +0200) System clock synchronized: yes NTP service: active RTC in local TZ: no",
"localectl status System Locale: LANG=en_US.UTF-8 VC Keymap: de-nodeadkeys X11 Layout: de X11 Variant: nodeadkeys",
"localectl list-locales C.UTF-8 en_US.UTF-8 en_ZA.UTF-8 en_ZW.UTF-8",
"localectl set-locale LANG= en_US .UTF-8",
"localectl list-keymaps ANSI-dvorak al al-plisi amiga-de amiga-us",
"localectl status VC Keymap: us",
"localectl set-keymap us",
"cat /etc/vconsole.conf FONT=\"eurlatgr\"",
"ls -1 /usr/lib/kbd/consolefonts/*.psfu.gz /usr/lib/kbd/consolefonts/eurlatgr.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-08.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-14.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-16.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-16+.psfu.gz /usr/lib/kbd/consolefonts/LatArCyrHeb-19.psfu.gz",
"setfont LatArCyrHeb-16.psfu.gz",
"FONT=LatArCyrHeb-16",
"reboot"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/assembly_changing-basic-environment-settings_configuring-basic-system-settings |
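Taken together, the commands in the preceding chapter can be applied in one short session, typically as the root user. The time zone, locale, and keymap values below are examples only; substitute the names reported by the corresponding list commands for your system.
$ timedatectl list-timezones | grep Berlin      # confirm the exact time zone name
$ timedatectl set-timezone Europe/Berlin        # set the system time zone
$ localectl list-locales | grep en_US           # confirm the locale is available
$ localectl set-locale LANG=en_US.UTF-8         # set the system-wide locale
$ localectl list-keymaps | grep '^us$'          # confirm the keymap name
$ localectl set-keymap us                       # set the default keymap
$ timedatectl                                   # verify date, time, and time zone
$ localectl status                              # verify locale and keymap settings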
Console APIs | Console APIs OpenShift Container Platform 4.17 Reference guide for console APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/console_apis/index |
C.6. Accessing Graphical Applications Remotely | C.6. Accessing Graphical Applications Remotely It is possible to access graphical applications on a remote server using these methods: You can start a separate application directly from your SSH session in your local X server. For that, you need to enable X11 forwarding. See Section 14.5.1, "X11 Forwarding" for details. You can run the whole X session over network using VNC. This method can be useful, especially when you are using a workstation without X server, for example, a non-Linux system. See Chapter 15, TigerVNC for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-x-accessing_graphical_applications_remotely |
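As a minimal illustration of the first method above, the ssh client can usually request X11 forwarding per session with the -X option (or -Y for trusted forwarding), provided that the remote sshd permits it ( X11Forwarding yes in /etc/ssh/sshd_config ) and that an X server is running on your local machine. The host and application names below are placeholders.
$ ssh -X user@remote.example.com       # open a session with X11 forwarding enabled
[user@remote ~]$ firefox &             # any graphical application installed on the server;
                                       # its window is displayed by your local X server
For the VNC method, a viewer invocation such as vncviewer remote.example.com:1 connects to display 1 of a running VNC server, as described in the TigerVNC chapter referenced above.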
Chapter 3. Installing the core components of Service Telemetry Framework | Chapter 3. Installing the core components of Service Telemetry Framework You can use Operators to load the Service Telemetry Framework (STF) components and objects. Operators manage each of the following STF core components: Certificate Management AMQ Interconnect Smart Gateways Prometheus and Alertmanager Service Telemetry Framework (STF) uses other supporting Operators as part of the deployment. STF can resolve most dependencies automatically, but you need to pre-install some Operators, such as Cluster Observability Operator, which provides an instance of Prometheus and Alertmanager, and cert-manager for Red Hat OpenShift, which provides management of certificates. Prerequisites A Red Hat OpenShift Container Platform Extended Update Support (EUS) release version 4.14 or 4.16 is running. You have prepared your Red Hat OpenShift Container Platform environment and ensured that there is persistent storage and enough resources to run the STF components on top of the Red Hat OpenShift Container Platform environment. For more information about STF performance, see the Red Hat Knowledge Base article Service Telemetry Framework Performance and Scaling . You have deployed STF in a fully connected or Red Hat OpenShift Container Platform-disconnected environment. STF is unavailable in network proxy environments. Important STF is compatible with Red Hat OpenShift Container Platform versions 4.14 and 4.16. Additional resources For more information about Operators, see the Understanding Operators guide. For more information about Operator catalogs, see Red Hat-provided Operator catalogs . For more information about the cert-manager Operator for Red Hat, see cert-manager Operator for Red Hat OpenShift overview . For more information about Cluster Observability Operator, see Cluster Observability Operator Overview . For more information about OpenShift life cycle policy and Extended Update Support (EUS), see Red Hat OpenShift Container Platform Life Cycle Policy . 3.1. Deploying Service Telemetry Framework to the Red Hat OpenShift Container Platform environment Deploy Service Telemetry Framework (STF) to collect and store Red Hat OpenStack Platform (RHOSP) telemetry. 3.1.1. Deploying Cluster Observability Operator You must install the Cluster Observability Operator (COO) before you create an instance of Service Telemetry Framework (STF) if the observabilityStrategy is set to use_redhat and the backends.metrics.prometheus.enabled is set to true in the ServiceTelemetry object. For more information about COO, see Cluster Observability Operator overview in the OpenShift Container Platform Documentation . Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted.
To store metrics in Prometheus, enable the Cluster Observability Operator by using the redhat-operators CatalogSource: USD oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-observability-operator namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: cluster-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Verify that the ClusterServiceVersion for Cluster Observability Operator has a status of Succeeded : USD oc wait --for jsonpath="{.status.phase}"=Succeeded csv --namespace=openshift-operators -l operators.coreos.com/cluster-observability-operator.openshift-operators clusterserviceversion.operators.coreos.com/observability-operator.v0.0.26 condition met 3.1.2. Deploying cert-manager for Red Hat OpenShift The cert-manager for Red Hat OpenShift (cert-manager) Operator must be pre-installed before creating an instance of Service Telemetry Framework (STF). For more information about cert-manager, see cert-manager for Red Hat OpenShift overview . In versions of STF, the only available cert-manager channel was tech-preview which is available until Red Hat OpenShift Container Platform v4.12. Installations of cert-manager on versions of Red Hat OpenShift Container Platform v4.14 and later must be installed from the stable-v1 channel. For new installations of STF it is recommended to install cert-manager from the stable-v1 channel. Warning Only one deployment of cert-manager can be installed per Red Hat OpenShift Container Platform cluster. Subscribing to cert-manager in more than one project causes the deployments to conflict with each other. Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. Verify cert-manager is not already installed on the Red Hat OpenShift Container Platform cluster. If any results are returned, do not install another instance of cert-manager: USD oc get sub --all-namespaces -o json | jq '.items[] | select(.metadata.name | match("cert-manager")) | .metadata.name' Create a namespace for the cert-manager Operator: USD oc create -f - <<EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: cert-manager-operator spec: finalizers: - kubernetes EOF Create an OperatorGroup for the cert-manager Operator: USD oc create -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - cert-manager-operator upgradeStrategy: Default EOF Subscribe to the cert-manager Operator by using the redhat-operators CatalogSource: USD oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator labels: operators.coreos.com/openshift-cert-manager-operator.cert-manager-operator: "" spec: channel: stable-v1 installPlanApproval: Automatic name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Validate your ClusterServiceVersion. Ensure that cert-manager Operator displays a phase of Succeeded : oc wait --for jsonpath="{.status.phase}"=Succeeded csv --namespace=cert-manager-operator --selector=operators.coreos.com/openshift-cert-manager-operator.cert-manager-operator clusterserviceversion.operators.coreos.com/cert-manager-operator.v1.12.1 condition met 3.1.3. 
Deploying Service Telemetry Operator Deploy Service Telemetry Operator on Red Hat OpenShift Container Platform to provide the supporting Operators and interface for creating an instance of Service Telemetry Framework (STF) to monitor Red Hat OpenStack Platform (RHOSP) cloud platforms. Prerequisites You have installed Cluster Observability Operator to allow storage of metrics. For more information, see Section 3.1.1, "Deploying Cluster Observability Operator" . You have installed cert-manager for Red Hat OpenShift to allow certificate management. For more information, see Section 3.1.2, "Deploying cert-manager for Red Hat OpenShift" . Procedure Create a namespace to contain the STF components, for example, service-telemetry : USD oc new-project service-telemetry Create an OperatorGroup in the namespace so that you can schedule the Operator pods: USD oc create -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: service-telemetry-operator-group namespace: service-telemetry spec: targetNamespaces: - service-telemetry EOF For more information, see OperatorGroups . Create the Service Telemetry Operator subscription to manage the STF instances: USD oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: service-telemetry-operator namespace: service-telemetry spec: channel: stable-1.5 installPlanApproval: Automatic name: service-telemetry-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Validate the Service Telemetry Operator and the dependent operators have their phase as Succeeded: USD oc wait --for jsonpath="{.status.phase}"=Succeeded csv --namespace=service-telemetry -l operators.coreos.com/service-telemetry-operator.service-telemetry ; oc get csv --namespace service-telemetry clusterserviceversion.operators.coreos.com/service-telemetry-operator.v1.5.1700688542 condition met NAME DISPLAY VERSION REPLACES PHASE amq7-interconnect-operator.v1.10.17 Red Hat Integration - AMQ Interconnect 1.10.17 amq7-interconnect-operator.v1.10.4 Succeeded observability-operator.v0.0.26 Cluster Observability Operator 0.1.0 Succeeded service-telemetry-operator.v1.5.1700688542 Service Telemetry Operator 1.5.1700688542 Succeeded smart-gateway-operator.v5.0.1700688539 Smart Gateway Operator 5.0.1700688539 Succeeded 3.2. Creating a ServiceTelemetry object in Red Hat OpenShift Container Platform Create a ServiceTelemetry object in Red Hat OpenShift Container Platform to result in the Service Telemetry Operator creating the supporting components for a Service Telemetry Framework (STF) deployment. For more information, see Section 3.2.1, "Primary parameters of the ServiceTelemetry object" . Prerequisites You have deployed STF and the supporting operators. For more information, see Section 3.1, "Deploying Service Telemetry Framework to the Red Hat OpenShift Container Platform environment" . You have installed Cluster Observability Operator to allow storage of metrics. For more information, see Section 3.1.1, "Deploying Cluster Observability Operator" . You have installed cert-manager for Red Hat OpenShift to allow certificate management. For more information, see Section 3.1.2, "Deploying cert-manager for Red Hat OpenShift" . Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. 
To deploy STF that results in the core components for metrics delivery being configured, create a ServiceTelemetry object: USD oc apply -f - <<EOF apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: alerting: alertmanager: storage: persistent: pvcStorageRequest: 20G strategy: persistent enabled: true backends: metrics: prometheus: enabled: true scrapeInterval: 30s storage: persistent: pvcStorageRequest: 20G retention: 24h strategy: persistent clouds: - metrics: collectors: - bridge: ringBufferCount: 15000 ringBufferSize: 16384 verbose: false collectorType: collectd debugEnabled: false subscriptionAddress: collectd/cloud1-telemetry - bridge: ringBufferCount: 15000 ringBufferSize: 16384 verbose: false collectorType: ceilometer debugEnabled: false subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - bridge: ringBufferCount: 15000 ringBufferSize: 65535 verbose: false collectorType: sensubility debugEnabled: false subscriptionAddress: sensubility/cloud1-telemetry name: cloud1 observabilityStrategy: use_redhat transports: qdr: auth: basic certificates: caCertDuration: 70080h endpointCertDuration: 70080h enabled: true web: enabled: false EOF To override these defaults, add the configuration to the spec parameter. View the STF deployment logs in the Service Telemetry Operator: USD oc logs --selector name=service-telemetry-operator ... --------------------------- Ansible Task Status Event StdOut ----------------- PLAY RECAP ********************************************************************* localhost : ok=90 changed=0 unreachable=0 failed=0 skipped=26 rescued=0 ignored=0 Verification To determine that all workloads are operating correctly, view the pods and the status of each pod. USD oc get pods NAME READY STATUS RESTARTS AGE alertmanager-default-0 3/3 Running 0 123m default-cloud1-ceil-meter-smartgateway-7dfb95fcb6-bs6jl 3/3 Running 0 122m default-cloud1-coll-meter-smartgateway-674d88d8fc-858jk 3/3 Running 0 122m default-cloud1-sens-meter-smartgateway-9b869695d-xcssf 3/3 Running 0 122m default-interconnect-6cbf65d797-hk7l6 1/1 Running 0 123m interconnect-operator-7bb99c5ff4-l6xc2 1/1 Running 0 138m prometheus-default-0 3/3 Running 0 122m service-telemetry-operator-7966cf57f-g4tx4 1/1 Running 0 138m smart-gateway-operator-7d557cb7b7-9ppls 1/1 Running 0 138m 3.2.1. Primary parameters of the ServiceTelemetry object You can set the following primary configuration parameters of the ServiceTelemetry object to configure your STF deployment: alerting backends clouds graphing highAvailability transports The backends parameter Set the value of the backends parameter to allocate the storage back ends for metrics and events, and to enable the Smart Gateways that the clouds parameter defines. For more information, see the section called "The clouds parameter" . You can use Prometheus as the metrics storage back end and Elasticsearch as the events storage back end. The Service Telemetry Operator can create custom resource objects that the Prometheus Operator watches to create a Prometheus workload. You need an external deployment of Elasticsearch to store events. Enabling Prometheus as a storage back end for metrics To enable Prometheus as a storage back end for metrics, you must configure the ServiceTelemetry object. 
Procedure Edit the ServiceTelemetry object: USD oc edit stf default Set the value of the backends.metrics.prometheus.enabled parameter to true : apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: [...] backends: metrics: prometheus: enabled: true Configuring persistent storage for Prometheus Set the additional parameters in backends.metrics.prometheus.storage.persistent to configure persistent storage options for Prometheus, such as storage class and volume size. Define the back end storage class with the storageClass parameter. If you do not set this parameter, the Service Telemetry Operator uses the default storage class for the Red Hat OpenShift Container Platform cluster. Define the minimum required volume size for the storage request with the pvcStorageRequest parameter. By default, Service Telemetry Operator requests a volume size of 20G (20 Gigabytes). Procedure List the available storage classes: USD oc get storageclasses NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE csi-manila-ceph manila.csi.openstack.org Delete Immediate false 20h standard (default) kubernetes.io/cinder Delete WaitForFirstConsumer true 20h standard-csi cinder.csi.openstack.org Delete WaitForFirstConsumer true 20h Edit the ServiceTelemetry object: USD oc edit stf default Set the value of the backends.metrics.prometheus.enabled parameter to true and the value of backends.metrics.prometheus.storage.strategy to persistent : apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: [...] backends: metrics: prometheus: enabled: true storage: strategy: persistent persistent: storageClass: standard-csi pvcStorageRequest: 50G Enabling Elasticsearch as a storage back end for events Note versions of STF managed Elasticsearch objects for the community supported Elastic Cloud on Kubernetes Operator (ECK). Elasticsearch management functionality is deprecated in STF 1.5.3. You can still forward to an existing Elasticsearch instance that you deploy and manage with ECK, but you cannot manage the creation of Elasticsearch objects. When you upgrade your STF deployment, existing Elasticsearch objects and deployments remain, but are no longer managed by STF. For more information about using Elasticsearch with STF, see the Red Hat Knowledge Base article Using Service Telemetry Framework with Elasticsearch . To enable events forwarding to Elasticsearch as a storage back end, you must configure the ServiceTelemetry object. Procedure Edit the ServiceTelemetry object: USD oc edit stf default Set the value of the backends.events.elasticsearch.enabled parameter to true and configure the hostUrl with the relevant Elasticsearch instance: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: [...] backends: events: elasticsearch: enabled: true forwarding: hostUrl: https://external-elastic-http.domain:9200 tlsServerName: "" tlsSecretName: elasticsearch-es-cert userSecretName: elasticsearch-es-elastic-user useBasicAuth: true useTls: true Create the secret named in the userSecretName parameter to store the basic auth credentials USD oc create secret generic elasticsearch-es-elastic-user --from-literal=elastic='<PASSWORD>' Copy the CA certificate into a file named EXTERNAL-ES-CA.pem , then create the secret named in the tlsSecretName parameter to make it available to STF USD cat EXTERNAL-ES-CA.pem -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE----- USD oc create secret generic elasticsearch-es-cert --from-file=ca.crt=EXTERNAL-ES-CA.pem The clouds parameter Configure the clouds parameter to define which Smart Gateway objects deploy and provide the interface for monitored cloud environments to connect to an instance of STF. If a supporting back end is available, metrics and events Smart Gateways for the default cloud configuration are created. By default, the Service Telemetry Operator creates Smart Gateways for cloud1 . You can create a list of cloud objects to control which Smart Gateways are created for the defined clouds. Each cloud consists of data types and collectors. Data types are metrics or events . Each data type consists of a list of collectors, the message bus subscription address, and a parameter to enable debugging. Available collectors for metrics are collectd , ceilometer , and sensubility . Available collectors for events are collectd and ceilometer . Ensure that the subscription address for each of these collectors is unique for every cloud, data type, and collector combination. The default cloud1 configuration is represented by the following ServiceTelemetry object, which provides subscriptions and data storage of metrics and events for collectd, Ceilometer, and Sensubility data collectors for a particular cloud instance: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: clouds: - name: cloud1 metrics: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-telemetry - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - collectorType: sensubility subscriptionAddress: sensubility/cloud1-telemetry debugEnabled: false events: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-notify - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-event.sample Each item of the clouds parameter represents a cloud instance. A cloud instance consists of three top-level parameters: name , metrics , and events . The metrics and events parameters represent the corresponding back end for storage of that data type. The collectors parameter specifies a list of objects made up of two required parameters, collectorType and subscriptionAddress , and these represent an instance of the Smart Gateway. The collectorType parameter specifies data collected by either collectd, Ceilometer, or Sensubility. The subscriptionAddress parameter provides the AMQ Interconnect address to which a Smart Gateway subscribes. You can use the optional Boolean parameter debugEnabled within the collectors parameter to enable additional console debugging in the running Smart Gateway pod. Additional resources For more information about deleting default Smart Gateways, see Section 4.3.3, "Deleting the default Smart Gateways" . For more information about how to configure multiple clouds, see Section 4.3, "Configuring multiple clouds" . The alerting parameter Set the alerting parameter to create an Alertmanager instance and a storage back end. By default, alerting is enabled. For more information, see Section 6.3, "Alerts in Service Telemetry Framework" . The graphing parameter Set the graphing parameter to create a Grafana instance. By default, graphing is disabled. For more information, see Section 6.1, "Dashboards in Service Telemetry Framework" . The highAvailability parameter Warning STF high availability (HA) mode is deprecated and is not supported in production environments. 
Red Hat OpenShift Container Platform is a highly-available platform, and you can cause issues and complicate debugging in STF if you enable HA mode. Set the highAvailability parameter to instantiate multiple copies of STF components to reduce recovery time of components that fail or are rescheduled. By default, highAvailability is disabled. For more information, see Section 6.6, "High availability" . The transports parameter Set the transports parameter to enable the message bus for an STF deployment. The only transport currently supported is AMQ Interconnect. By default, the qdr transport is enabled. 3.3. Accessing user interfaces for STF components In Red Hat OpenShift Container Platform, applications are exposed to the external network through a route. For more information about routes, see Configuring ingress cluster traffic . In Service Telemetry Framework (STF), HTTPS routes are exposed for each service that has a web-based interface and are protected by Red Hat OpenShift Container Platform role-based access control (RBAC). You need the following permissions to access the corresponding component UIs: {"namespace":"service-telemetry", "resource":"grafana", "group":"grafana.integreatly.org", "verb":"get"} {"namespace":"service-telemetry", "resource":"prometheus", "group":"monitoring.rhobs", "verb":"get"} {"namespace":"service-telemetry", "resource":"alertmanager", "group":"monitoring.rhobs", "verb":"get"} For more information about RBAC, see Using RBAC to define and apply permissions . Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry List the available web UI routes in the service-telemetry project: USD oc get routes | grep web default-alertmanager-proxy default-alertmanager-proxy-service-telemetry.apps.infra.watch default-alertmanager-proxy web reencrypt/Redirect None default-prometheus-proxy default-prometheus-proxy-service-telemetry.apps.infra.watch default-prometheus-proxy web reencrypt/Redirect None In a web browser, navigate to https://<route_address> to access the web interface for the corresponding service. 3.4. Configuring an alternate observability strategy To skip the deployment of storage, visualization, and alerting backends, add observabilityStrategy: none to the ServiceTelemetry spec. In this mode, you only deploy AMQ Interconnect routers and Smart Gateways, and you must configure an external Prometheus-compatible system to collect metrics from the STF Smart Gateways, and an external Elasticsearch to receive the forwarded events. Procedure Create a ServiceTelemetry object with the property observabilityStrategy: none in the spec parameter. The following manifest results in a default deployment of STF that is suitable for receiving telemetry from a single cloud with all metrics collector types.
USD oc apply -f - <<EOF apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: observabilityStrategy: none EOF Delete the remaining objects that are managed by community operators USD for o in alertmanagers.monitoring.rhobs/default prometheuses.monitoring.rhobs/default elasticsearch/elasticsearch grafana/default-grafana; do oc delete USDo; done To verify that all workloads are operating correctly, view the pods and the status of each pod: USD oc get pods NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6f8547df6c-p2db5 3/3 Running 0 132m default-cloud1-ceil-meter-smartgateway-59c845d65b-gzhcs 3/3 Running 0 132m default-cloud1-coll-event-smartgateway-bf859f8d77-tzb66 3/3 Running 0 132m default-cloud1-coll-meter-smartgateway-75bbd948b9-d5phm 3/3 Running 0 132m default-cloud1-sens-meter-smartgateway-7fdbb57b6d-dh2g9 3/3 Running 0 132m default-interconnect-668d5bbcd6-57b2l 1/1 Running 0 132m interconnect-operator-b8f5bb647-tlp5t 1/1 Running 0 47h service-telemetry-operator-566b9dd695-wkvjq 1/1 Running 0 156m smart-gateway-operator-58d77dcf7-6xsq7 1/1 Running 0 47h Additional resources For more information about configuring additional clouds or to change the set of supported collectors, see Section 4.3.2, "Deploying Smart Gateways" . To migrate an existing STF deployment to use_redhat , see the Red Hat Knowledge Base article Migrating Service Telemetry Framework to fully supported operators . | [
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-observability-operator namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: cluster-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc wait --for jsonpath=\"{.status.phase}\"=Succeeded csv --namespace=openshift-operators -l operators.coreos.com/cluster-observability-operator.openshift-operators clusterserviceversion.operators.coreos.com/observability-operator.v0.0.26 condition met",
"oc get sub --all-namespaces -o json | jq '.items[] | select(.metadata.name | match(\"cert-manager\")) | .metadata.name'",
"oc create -f - <<EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: cert-manager-operator spec: finalizers: - kubernetes EOF",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - cert-manager-operator upgradeStrategy: Default EOF",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator labels: operators.coreos.com/openshift-cert-manager-operator.cert-manager-operator: \"\" spec: channel: stable-v1 installPlanApproval: Automatic name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"wait --for jsonpath=\"{.status.phase}\"=Succeeded csv --namespace=cert-manager-operator --selector=operators.coreos.com/openshift-cert-manager-operator.cert-manager-operator clusterserviceversion.operators.coreos.com/cert-manager-operator.v1.12.1 condition met",
"oc new-project service-telemetry",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: service-telemetry-operator-group namespace: service-telemetry spec: targetNamespaces: - service-telemetry EOF",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: service-telemetry-operator namespace: service-telemetry spec: channel: stable-1.5 installPlanApproval: Automatic name: service-telemetry-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc wait --for jsonpath=\"{.status.phase}\"=Succeeded csv --namespace=service-telemetry -l operators.coreos.com/service-telemetry-operator.service-telemetry ; oc get csv --namespace service-telemetry clusterserviceversion.operators.coreos.com/service-telemetry-operator.v1.5.1700688542 condition met NAME DISPLAY VERSION REPLACES PHASE amq7-interconnect-operator.v1.10.17 Red Hat Integration - AMQ Interconnect 1.10.17 amq7-interconnect-operator.v1.10.4 Succeeded observability-operator.v0.0.26 Cluster Observability Operator 0.1.0 Succeeded service-telemetry-operator.v1.5.1700688542 Service Telemetry Operator 1.5.1700688542 Succeeded smart-gateway-operator.v5.0.1700688539 Smart Gateway Operator 5.0.1700688539 Succeeded",
"oc apply -f - <<EOF apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: alerting: alertmanager: storage: persistent: pvcStorageRequest: 20G strategy: persistent enabled: true backends: metrics: prometheus: enabled: true scrapeInterval: 30s storage: persistent: pvcStorageRequest: 20G retention: 24h strategy: persistent clouds: - metrics: collectors: - bridge: ringBufferCount: 15000 ringBufferSize: 16384 verbose: false collectorType: collectd debugEnabled: false subscriptionAddress: collectd/cloud1-telemetry - bridge: ringBufferCount: 15000 ringBufferSize: 16384 verbose: false collectorType: ceilometer debugEnabled: false subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - bridge: ringBufferCount: 15000 ringBufferSize: 65535 verbose: false collectorType: sensubility debugEnabled: false subscriptionAddress: sensubility/cloud1-telemetry name: cloud1 observabilityStrategy: use_redhat transports: qdr: auth: basic certificates: caCertDuration: 70080h endpointCertDuration: 70080h enabled: true web: enabled: false EOF",
"oc logs --selector name=service-telemetry-operator --------------------------- Ansible Task Status Event StdOut ----------------- PLAY RECAP ********************************************************************* localhost : ok=90 changed=0 unreachable=0 failed=0 skipped=26 rescued=0 ignored=0",
"oc get pods NAME READY STATUS RESTARTS AGE alertmanager-default-0 3/3 Running 0 123m default-cloud1-ceil-meter-smartgateway-7dfb95fcb6-bs6jl 3/3 Running 0 122m default-cloud1-coll-meter-smartgateway-674d88d8fc-858jk 3/3 Running 0 122m default-cloud1-sens-meter-smartgateway-9b869695d-xcssf 3/3 Running 0 122m default-interconnect-6cbf65d797-hk7l6 1/1 Running 0 123m interconnect-operator-7bb99c5ff4-l6xc2 1/1 Running 0 138m prometheus-default-0 3/3 Running 0 122m service-telemetry-operator-7966cf57f-g4tx4 1/1 Running 0 138m smart-gateway-operator-7d557cb7b7-9ppls 1/1 Running 0 138m",
"oc edit stf default",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: [...] backends: metrics: prometheus: enabled: true",
"oc get storageclasses NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE csi-manila-ceph manila.csi.openstack.org Delete Immediate false 20h standard (default) kubernetes.io/cinder Delete WaitForFirstConsumer true 20h standard-csi cinder.csi.openstack.org Delete WaitForFirstConsumer true 20h",
"oc edit stf default",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: [...] backends: metrics: prometheus: enabled: true storage: strategy: persistent persistent: storageClass: standard-csi pvcStorageRequest: 50G",
"oc edit stf default",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: [...] backends: events: elasticsearch: enabled: true forwarding: hostUrl: https://external-elastic-http.domain:9200 tlsServerName: \"\" tlsSecretName: elasticsearch-es-cert userSecretName: elasticsearch-es-elastic-user useBasicAuth: true useTls: true",
"oc create secret generic elasticsearch-es-elastic-user --from-literal=elastic='<PASSWORD>'",
"cat EXTERNAL-ES-CA.pem -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- oc create secret generic elasticsearch-es-cert --from-file=ca.crt=EXTERNAL-ES-CA.pem",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: clouds: - name: cloud1 metrics: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-telemetry - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - collectorType: sensubility subscriptionAddress: sensubility/cloud1-telemetry debugEnabled: false events: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-notify - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-event.sample",
"{\"namespace\":\"service-telemetry\", \"resource\":\"grafana\", \"group\":\"grafana.integreatly.org\", \"verb\":\"get\"} {\"namespace\":\"service-telemetry\", \"resource\":\"prometheus\", \"group\":\"monitoring.rhobs\", \"verb\":\"get\"} {\"namespace\":\"service-telemetry\", \"resource\":\"alertmanager\", \"group\":\"monitoring.rhobs\", \"verb\":\"get\"}",
"oc project service-telemetry",
"oc get routes | grep web default-alertmanager-proxy default-alertmanager-proxy-service-telemetry.apps.infra.watch default-alertmanager-proxy web reencrypt/Redirect None default-prometheus-proxy default-prometheus-proxy-service-telemetry.apps.infra.watch default-prometheus-proxy web reencrypt/Redirect None",
"oc apply -f - <<EOF apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: name: default namespace: service-telemetry spec: observabilityStrategy: none EOF",
"for o in alertmanagers.monitoring.rhobs/default prometheuses.monitoring.rhobs/default elasticsearch/elasticsearch grafana/default-grafana; do oc delete USDo; done",
"oc get pods NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6f8547df6c-p2db5 3/3 Running 0 132m default-cloud1-ceil-meter-smartgateway-59c845d65b-gzhcs 3/3 Running 0 132m default-cloud1-coll-event-smartgateway-bf859f8d77-tzb66 3/3 Running 0 132m default-cloud1-coll-meter-smartgateway-75bbd948b9-d5phm 3/3 Running 0 132m default-cloud1-sens-meter-smartgateway-7fdbb57b6d-dh2g9 3/3 Running 0 132m default-interconnect-668d5bbcd6-57b2l 1/1 Running 0 132m interconnect-operator-b8f5bb647-tlp5t 1/1 Running 0 47h service-telemetry-operator-566b9dd695-wkvjq 1/1 Running 0 156m smart-gateway-operator-58d77dcf7-6xsq7 1/1 Running 0 47h"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-installing-the-core-components-of-stf_assembly |
Chapter 3. Placement Groups | Chapter 3. Placement Groups Placement Groups (PGs) are invisible to Ceph clients, but they play an important role in Ceph Storage Clusters. A Ceph Storage Cluster might require many thousands of OSDs to reach an exabyte level of storage capacity. Ceph clients store objects in pools, which are a logical subset of the overall cluster. The number of objects stored in a pool might easily run into the millions and beyond. A system with millions of objects or more cannot realistically track placement on a per-object basis and still perform well. Ceph assigns objects to placement groups, and placement groups to OSDs to make re-balancing dynamic and efficient. All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections. -- David Wheeler 3.1. About placement groups Tracking object placement on a per-object basis within a pool is computationally expensive at scale. To facilitate high performance at scale, Ceph subdivides a pool into placement groups, assigns each individual object to a placement group, and assigns the placement group to a primary OSD. If an OSD fails or the cluster re-balances, Ceph can move or replicate an entire placement group- that is, all of the objects in the placement groups- without having to address each object individually. This allows a Ceph cluster to re-balance or recover efficiently. When CRUSH assigns a placement group to an OSD, it calculates a series of OSDs- the first being the primary. The osd_pool_default_size setting minus 1 for replicated pools, and the number of coding chunks M for erasure-coded pools determine the number of OSDs storing a placement group that can fail without losing data permanently. Primary OSDs use CRUSH to identify the secondary OSDs and copy the placement group's contents to the secondary OSDs. For example, if CRUSH assigns an object to a placement group, and the placement group is assigned to OSD 5 as the primary OSD, if CRUSH calculates that OSD 1 and OSD 8 are secondary OSDs for the placement group, the primary OSD 5 will copy the data to OSDs 1 and 8. By copying data on behalf of clients, Ceph simplifies the client interface and reduces the client workload. The same process allows the Ceph cluster to recover and rebalance dynamically. When the primary OSD fails and gets marked out of the cluster, CRUSH assigns the placement group to another OSD, which receives copies of objects in the placement group. Another OSD in the Up Set will assume the role of the primary OSD. When you increase the number of object replicas or coding chunks, CRUSH will assign each placement group to additional OSDs as required. Note PGs do not own OSDs. CRUSH assigns many placement groups to each OSD pseudo-randomly to ensure that data gets distributed evenly across the cluster. 3.2. Placement group states When you check the storage cluster's status with the ceph -s or ceph -w commands, Ceph reports on the status of the placement groups (PGs). A PG has one or more states. The optimum state for PGs in the PG map is an active + clean state. activating The PG is peered, but not yet active. active Ceph processes requests to the PG. backfill_toofull A backfill operation is waiting because the destination OSD is over the backfillfull ratio. backfill_unfound Backfill stopped due to unfound objects. backfill_wait The PG is waiting in line to start backfill. 
backfilling Ceph is scanning and synchronizing the entire contents of a PG instead of inferring what contents need to be synchronized from the logs of recent operations. Backfill is a special case of recovery. clean Ceph replicated all objects in the PG accurately. creating Ceph is still creating the PG. deep Ceph is checking the PG data against stored checksums. degraded Ceph has not replicated some objects in the PG accurately yet. down A replica with necessary data is down, so the PG is offline. A PG with less than min_size replicas is marked as down. Use ceph health detail to understand the backing OSD state. forced_backfill High backfill priority of that PG is enforced by user. forced_recovery High recovery priority of that PG is enforced by user. incomplete Ceph detects that a PG is missing information about writes that might have occurred, or does not have any healthy copies. If you see this state, try to start any failed OSDs that might contain the needed information. In the case of an erasure coded pool, temporarily reducing min_size might allow recovery. inconsistent Ceph detects inconsistencies in one or more replicas of an object in the PG, such as objects are the wrong size, objects are missing from one replica after recovery finished. peering The PG is undergoing the peering process. A peering process should clear off without much delay, but if it stays and the number of PGs in a peering state does not reduce in number, the peering might be stuck. peered The PG has peered, but cannot serve client IO due to not having enough copies to reach the pool's configured min_size parameter. Recovery might occur in this state, so the PG might heal up to min_size eventually. recovering Ceph is migrating or synchronizing objects and their replicas. recovery_toofull A recovery operation is waiting because the destination OSD is over its full ratio. recovery_unfound Recovery stopped due to unfound objects. recovery_wait The PG is waiting in line to start recovery. remapped The PG is temporarily mapped to a different set of OSDs from what CRUSH specified. repair Ceph is checking the PG and repairing any inconsistencies it finds, if possible. replay The PG is waiting for clients to replay operations after an OSD crashed. snaptrim Trimming snaps. snaptrim_error Error stopped trimming snaps. snaptrim_wait Queued to trim snaps. scrubbing Ceph is checking the PG metadata for inconsistencies. splitting Ceph is splitting the PG into multiple PGs. stale The PG is in an unknown state; the monitors have not received an update for it since the PG mapping changed. undersized The PG has fewer copies than the configured pool replication level. unknown The ceph-mgr has not yet received any information about the PG's state from an OSD since Ceph Manager started up. Additional resources See the knowledge base What are the possible Placement Group states in an Ceph cluster for more information. 3.3. Placement group tradeoffs Data durability and data distribution among all OSDs call for more placement groups but their number should be reduced to the minimum required for maximum performance to conserve CPU and memory resources. 3.3.1. Data durability Ceph strives to prevent the permanent loss of data. However, after an OSD fails, the risk of permanent data loss increases until the data it had is fully recovered. Permanent data loss, though rare, is still possible. 
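While recovery is in progress, you can watch for placement groups that are temporarily running with fewer copies than configured. The following is a small sketch that uses only commands documented in this chapter; the 300-second threshold is an illustrative value:
# Overall cluster health, including recovery progress
ceph -s
# List placement groups stuck in undersized or degraded states for more than 300 seconds
ceph pg dump_stuck undersized degraded 300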
The following scenario describes how Ceph could permanently lose data in a single placement group with three copies of the data: An OSD fails and all copies of the object it contains are lost. For all objects within a placement group stored on the OSD, the number of replicas suddenly drops from three to two. Ceph starts recovery for each placement group stored on the failed OSD by choosing a new OSD to re-create the third copy of all objects for each placement group. The second OSD containing a copy of the same placement group fails before the new OSD is fully populated with the third copy. Some objects will then only have one surviving copy. Ceph picks yet another OSD and keeps copying objects to restore the desired number of copies. The third OSD containing a copy of the same placement group fails before recovery is complete. If this OSD contained the only remaining copy of an object, the object is lost permanently. Hardware failure isn't an exception, but an expectation. To prevent the foregoing scenario, ideally the recovery process should be as fast as reasonably possible. The size of your cluster, your hardware configuration and the number of placement groups play an important role in total recovery time. Small clusters don't recover as quickly. In a cluster containing 10 OSDs with 512 placement groups in a three replica pool, CRUSH will give each placement group three OSDs. Each OSD will end up hosting (512 * 3) / 10 = ~150 placement groups. When the first OSD fails, the cluster will start recovery for all 150 placement groups simultaneously. It is likely that Ceph stored the remaining 150 placement groups randomly across the 9 remaining OSDs. Therefore, each remaining OSD is likely to send copies of objects to all other OSDs and also receive some new objects, because the remaining OSDs become responsible for some of the 150 placement groups now assigned to them. The total recovery time depends upon the hardware supporting the pool. For example, in a 10 OSD cluster, if a host contains one OSD with a 1 TB SSD, and a 10 GB/s switch connects each of the 10 hosts, the recovery time will take M minutes. By contrast, if a host contains two SATA OSDs and a 1 GB/s switch connects the five hosts, recovery will take substantially longer. Interestingly, in a cluster of this size, the number of placement groups has almost no influence on data durability. The placement group count could be 128 or 8192 and the recovery would not be slower or faster. However, growing the same Ceph cluster to 20 OSDs instead of 10 OSDs is likely to speed up recovery and therefore improve data durability significantly. Why? Each OSD now participates in only 75 placement groups instead of 150. The 20 OSD cluster will still require all 19 remaining OSDs to perform the same amount of copy operations in order to recover. In the 10 OSD cluster, each OSDs had to copy approximately 100 GB. In the 20 OSD cluster each OSD only has to copy 50 GB each. If the network was the bottleneck, recovery will happen twice as fast. In other words, recovery time decreases as the number of OSDs increases. In large clusters, PG count is important! If the exemplary cluster grows to 40 OSDs, each OSD will only host 35 placement groups. If an OSD dies, recovery time will decrease unless another bottleneck precludes improvement. However, if this cluster grows to 200 OSDs, each OSD will only host approximately 7 placement groups. 
If an OSD dies, recovery will happen between at most of 21 (7 * 3) OSDs in these placement groups: recovery will take longer than when there were 40 OSDs, meaning the number of placement groups should be increased! Important No matter how short the recovery time, there is a chance for another OSD storing the placement group to fail while recovery is in progress. In the 10 OSD cluster described above, if any OSD fails, then approximately 8 placement groups (that is 75 pgs / 9 osds being recovered) will only have one surviving copy. And if any of the 8 remaining OSDs fail, the last objects of one placement group are likely to be lost (that is 8 pgs / 8 osds with only one remaining copy being recovered). This is why starting with a somewhat larger cluster is preferred (for example, 50 OSDs). When the size of the cluster grows to 20 OSDs, the number of placement groups damaged by the loss of three OSDs drops. The second OSD lost will degrade approximately 2 (that is 35 pgs / 19 osds being recovered) instead of 8 and the third OSD lost will only lose data if it is one of the two OSDs containing the surviving copy. In other words, if the probability of losing one OSD is 0.0001% during the recovery time frame, it goes from 8 * 0.0001% in the cluster with 10 OSDs to 2 * 0.0001% in the cluster with 20 OSDs. Having 512 or 4096 placement groups is roughly equivalent in a cluster with less than 50 OSDs as far as data durability is concerned. Tip In a nutshell, more OSDs means faster recovery and a lower risk of cascading failures leading to the permanent loss of a placement group and its objects. When you add an OSD to the cluster, it might take a long time to populate the new OSD with placement groups and objects. However there is no degradation of any object and adding the OSD has no impact on data durability. 3.3.2. Data distribution Ceph seeks to avoid hot spots- that is, some OSDs receive substantially more traffic than other OSDs. Ideally, CRUSH assigns objects to placement groups evenly so that when the placement groups get assigned to OSDs (also pseudo randomly), the primary OSDs store objects such that they are evenly distributed across the cluster and hot spots and network over-subscription problems cannot develop because of data distribution. Since CRUSH computes the placement group for each object, but does not actually know how much data is stored in each OSD within this placement group, the ratio between the number of placement groups and the number of OSDs might influence the distribution of the data significantly. For instance, if there was only one placement group with ten OSDs in a three replica pool, Ceph would only use three OSDs to store data because CRUSH would have no other choice. When more placement groups are available, CRUSH is more likely to evenly spread objects across OSDs. CRUSH also evenly assigns placement groups to OSDs. As long as there are one or two orders of magnitude more placement groups than OSDs, the distribution should be even. For instance, 256 placement groups for 3 OSDs, 512 or 1024 placement groups for 10 OSDs, and so forth. The ratio between OSDs and placement groups usually solves the problem of uneven data distribution for Ceph clients that implement advanced features like object striping. For example, a 4 TB block device might get sharded up into 4 MB objects. The ratio between OSDs and placement groups does not address uneven data distribution in other cases, because CRUSH does not take object size into account. 
Using the librados interface to store some relatively small objects and some very large objects can lead to uneven data distribution. For example, one million 4K objects totaling 4 GB are evenly spread among 1000 placement groups on 10 OSDs. They will use 4 GB / 10 = 400 MB on each OSD. If one 400 MB object is added to the pool, the three OSDs supporting the placement group in which the object has been placed will be filled with 400 MB + 400 MB = 800 MB while the seven others will remain occupied with only 400 MB. 3.3.3. Resource usage For each placement group, OSDs and Ceph monitors need memory, network and CPU at all times, and even more during recovery. Sharing this overhead by clustering objects within a placement group is one of the main reasons placement groups exist. Minimizing the number of placement groups saves significant amounts of resources. 3.4. Placement group count The number of placement groups in a pool plays a significant role in how a cluster peers, distributes data and rebalances. Small clusters don't see as many performance improvements compared to large clusters by increasing the number of placement groups. However, clusters that have many pools accessing the same OSDs might need to carefully consider PG count so that Ceph OSDs use resources efficiently. Tip Red Hat recommends 100 to 200 PGs per OSD. 3.4.1. Placement group calculator The placement group (PG) calculator calculates the number of placement groups for you and addresses specific use cases. The PG calculator is especially helpful when using Ceph clients like the Ceph Object Gateway where there are many pools typically using the same rule (CRUSH hierarchy). You might still calculate PGs manually using the guidelines in Placement group count for small clusters and Calculating placement group count . However, the PG calculator is the preferred method of calculating PGs. See Ceph Placement Groups (PGs) per Pool Calculator on the Red Hat Customer Portal for details. 3.4.2. Configuring default placement group count When you create a pool, you also create a number of placement groups for the pool. If you don't specify the number of placement groups, Ceph will use the default value of 8 , which is unacceptably low. You can increase the number of placement groups for a pool, but we recommend setting reasonable default values too. You need to set both the number of placement groups (total), and the number of placement groups used for objects (used in PG splitting). They should be equal. 3.4.3. Placement group count for small clusters Small clusters don't benefit from large numbers of placement groups. As the number of OSDs increase, choosing the right value for pg_num and pgp_num becomes more important because it has a significant influence on the behavior of the cluster as well as the durability of the data when something goes wrong (that is the probability that a catastrophic event leads to data loss). It is important to use the PG calculator with small clusters. 3.4.4. Calculating placement group count If you have more than 50 OSDs, we recommend approximately 50-100 placement groups per OSD to balance out resource usage, data durability and distribution. If you have less than 50 OSDs, choosing among the PG Count for Small Clusters is ideal. For a single pool of objects, you can use the following formula to get a baseline: Where pool size is either the number of replicas for replicated pools or the K+M sum for erasure coded pools (as returned by ceph osd erasure-code-profile get ). 
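Written out, the baseline formula referred to above, which also appears in the command listing at the end of this chapter, is:
Total PGs = (OSDs * 100) / pool size
As an illustrative calculation only: a cluster with 10 OSDs and a replicated pool of size 3 gives (10 * 100) / 3 = ~333, which the next paragraph recommends rounding up to the nearest power of two, 512 in this case.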
You should then check if the result makes sense with the way you designed your Ceph cluster to maximize data durability, data distribution and minimize resource usage. The result should be rounded up to the nearest power of two. Rounding up is optional, but recommended for CRUSH to evenly balance the number of objects among placement groups. For a cluster with 200 OSDs and a pool size of 3 replicas, you would estimate your number of PGs as follows: With 8192 placement groups distributed across 200 OSDs, that evaluates to approximately 41 placement groups per OSD. You also need to consider the number of pools you are likely to use in your cluster, since each pool will create placement groups too. Ensure that you have a reasonable maximum placement group count . 3.4.5. Maximum placement group count When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD so that you arrive at a reasonable total number of placement groups. The aim is to achieve reasonably low variance per OSD without taxing system resources or making the peering process too slow. In an exemplary Ceph Storage Cluster consisting of 10 pools, each pool with 512 placement groups on ten OSDs, there are a total of 5,120 placement groups spread over ten OSDs, or 512 placement groups per OSD. That might not use too many resources depending on your hardware configuration. By contrast, if you create 1,000 pools with 512 placement groups each, the OSDs will handle ~50,000 placement groups each and it would require significantly more resources. Operating with too many placement groups per OSD can significantly reduce performance, especially during rebalancing or recovery. The Ceph Storage Cluster has a default maximum value of 300 placement groups per OSD. You can set a different maximum value in your Ceph configuration file. Tip Ceph Object Gateways deploy with 10-15 pools, so you might consider using less than 100 PGs per OSD to arrive at a reasonable maximum number. 3.5. Auto-scaling placement groups The number of placement groups (PGs) in a pool plays a significant role in how a cluster peers, distributes data, and rebalances. Auto-scaling the number of PGs can make managing the cluster easier. The pg-autoscaling command provides recommendations for scaling PGs, or automatically scales PGs based on how the cluster is being used. To learn more about how auto-scaling works, see Section 3.5.1, "Placement group auto-scaling" . To enable, or disable auto-scaling, see Section 3.5.3, "Setting placement group auto-scaling modes" . To view placement group scaling recommendations, see Section 3.5.4, "Viewing placement group scaling recommendations" . To set placement group auto-scaling, see Section 3.5.5, "Setting placement group auto-scaling" . To update the autoscaler globally, see Section 3.5.6, "Updating noautoscale flag" To set target pool size see, Section 3.6, "Specifying target pool size" . 3.5.1. Placement group auto-scaling How the auto-scaler works The auto-scaler analyzes pools and adjusts on a per-subtree basis. Because each pool can map to a different CRUSH rule, and each rule can distribute data across different devices, Ceph considers utilization of each subtree of the hierarchy independently. For example, a pool that maps to OSDs of class ssd , and a pool that maps to OSDs of class hdd , will each have optimal PG counts that depend on the number of those respective device types. 3.5.2. 
Placement group splitting and merging Splitting Red Hat Ceph Storage can split existing placement groups (PGs) into smaller PGs, which increases the total number of PGs for a given pool. Splitting existing placement groups (PGs) allows a small Red Hat Ceph Storage cluster to scale over time as storage requirements increase. The PG auto-scaling feature can increase the pg_num value, which causes the existing PGs to split as the storage cluster expands. If the PG auto-scaling feature is disabled, then you can manually increase the pg_num value, which triggers the PG split process to begin. For example, increasing the pg_num value from 4 to 16 splits each existing PG into four pieces. Increasing the pg_num value will also increase the pgp_num value, but the pgp_num value increases at a gradual rate. This gradual increase is done to minimize the impact to a storage cluster's performance and to a client's workload, because migrating object data adds a significant load to the system. By default, Ceph queues and moves no more than 5% of the object data that is in a "misplaced" state. This default percentage can be adjusted with the target_max_misplaced_ratio option. Merging Red Hat Ceph Storage can also merge two existing PGs into a larger PG, which decreases the total number of PGs. Merging two PGs together can be useful, especially when the relative amount of objects in a pool decreases over time, or when the initial number of PGs chosen was too large. While merging PGs can be useful, it is also a complex and delicate process. When doing a merge, I/O to the PG is paused, and only one PG is merged at a time to minimize the impact to a storage cluster's performance. Ceph works slowly on merging the object data until the new pg_num value is reached. 3.5.3. Setting placement group auto-scaling modes Each pool in the Red Hat Ceph Storage cluster has a pg_autoscale_mode property for PGs that you can set to off , on , or warn . off : Disables auto-scaling for the pool. It is up to the administrator to choose an appropriate PG number for each pool. Refer to the Placement group count section for more information. on : Enables automated adjustments of the PG count for the given pool. warn : Raises health alerts when the PG count needs adjustment. Note In Red Hat Ceph Storage 5 and later releases, pg_autoscale_mode is on by default. Upgraded storage clusters retain the existing pg_autoscale_mode setting. The pg_auto_scale mode is on for the newly created pools. PG count is automatically adjusted, and ceph status might display a recovering state during PG count adjustment. The autoscaler uses the bulk flag to determine which pool should start with a full complement of PGs and only scales down when the usage ratio across the pool is not even. However, if the pool does not have the bulk flag, the pool starts with minimal PGs, and PGs are added only when there is more usage in the pool. Note The autoscaler identifies any overlapping roots and prevents the pools with such roots from scaling because overlapping roots can cause problems with the scaling process. Procedure Enable auto-scaling on an existing pool: Syntax Example Enable auto-scaling on a newly created pool: Syntax Example Create a pool with the bulk flag: Syntax Example Set or unset the bulk flag for an existing pool: Important The values must be written as true , false , 1 , or 0 . 1 is equivalent to true and 0 is equivalent to false . If written with different capitalization, or with other content, an error is emitted.
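For contrast with the invalid invocation shown next, a valid sequence for setting and then checking the flag looks like the following sketch; the pool name testpool is taken from the examples in the command listing for this chapter:
# Set the bulk flag using one of the accepted values (true, false, 1, or 0)
ceph osd pool set testpool bulk true
# Confirm the value that is currently set
ceph osd pool get testpool bulk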
The following is an example of the command written with the wrong syntax: Syntax Example Get the bulk flag of an existing pool: Syntax Example 3.5.4. Viewing placement group scaling recommendations You can view the pool, its relative utilization and any suggested changes to the PG count in the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster Root-level access to all the nodes. Procedure You can view each pool, its relative utilization, and any suggested changes to the PG count using: Output will look similar to the following: SIZE is the amount of data stored in the pool. TARGET SIZE , if present, is the amount of data the administrator has specified they expect to eventually be stored in this pool. The system uses the larger of the two values for its calculation. RATE is the multiplier for the pool that determines how much raw storage capacity the pool uses. For example, a 3 replica pool has a ratio of 3.0 , while a k=4,m=2 erasure coded pool has a ratio of 1.5 . RAW CAPACITY is the total amount of raw storage capacity on the OSDs that are responsible for storing the pool's data. RATIO is the ratio of the total capacity that the pool is consuming, that is, ratio = size * rate / raw capacity. TARGET RATIO , if present, is the ratio of storage the administrator has specified that they expect the pool to consume relative to other pools with target ratios set. If both target size bytes and ratio are specified, the ratio takes precedence. The default value of TARGET RATIO is 0 unless it was specified while creating the pool. The more the --target_ratio you give in a pool, the larger the PGs you are expecting the pool to have. EFFECTIVE RATIO , is the target ratio after adjusting in two ways: 1. subtracting any capacity expected to be used by pools with target size set. 2. normalizing the target ratios among pools with target ratio set so they collectively target the rest of the space. For example, 4 pools with target ratio 1.0 would have an effective ratio of 0.25. The system uses the larger of the actual ratio and the effective ratio for its calculation. BIAS , is used as a multiplier to manually adjust a pool's PG based on prior information about how many PGs a specific pool is expected to have. By default, the value is 1.0 unless it was specified when creating a pool. The more --bias you give in a pool, the larger the PGs you are expecting the pool to have. PG_NUM is the current number of PGs for the pool, or the current number of PGs that the pool is working towards, if a pg_num change is in progress. NEW PG_NUM , if present, is the suggested number of PGs ( pg_num ). It is always a power of 2, and is only present if the suggested value varies from the current value by more than a factor of 3. AUTOSCALE , is the pool pg_autoscale_mode , and is either on , off , or warn . BULK , is used to determine which pool should start out with a full complement of PGs. BULK only scales down when the usage ratio across the pool is not even. If the pool does not have this flag, the pool starts out with a minimal number of PGs that only increases when there is more usage in the pool. The BULK values are true , false , 1 , or 0 , where 1 is equivalent to true and 0 is equivalent to false . The default value is false . Set the BULK value either during or after pool creation. For more information about using the bulk flag, see Creating a pool and Setting placement group auto-scaling modes . 3.5.5.
Setting placement group auto-scaling Allowing the cluster to automatically scale PGs based on cluster usage is the simplest approach to scaling PGs. Red Hat Ceph Storage takes the total available storage and the target number of PGs for the whole system, compares how much data is stored in each pool, and apportions the PGs accordingly. The command only makes changes to a pool whose current number of PGs ( pg_num ) is more than three times off from the calculated or suggested PG number. The target number of PGs per OSD is based on the mon_target_pg_per_osd configurable. The default value is set to 100 . Procedure To adjust mon_target_pg_per_osd : Syntax For example: 3.5.6. Updating noautoscale flag If you want to enable or disable the autoscaler for all the pools at the same time, you can use the noautoscale global flag. This global flag is useful during upgradation of the storage cluster when some OSDs are bounced or when the cluster is under maintenance. You can set the flag before any activity and unset it once the activity is complete. By default, the noautoscale flag is set to off . When this flag is set, then all the pools have pg_autoscale_mode as off and all the pools have the autoscaler disabled. Prerequisites A running Red Hat Ceph Storage cluster Root-level access to all the nodes. Procedure Get the value of the noautoscale flag: Example Set the noautoscale flag before any activity: Example Unset the noautoscale flag on completion of the activity: Example 3.6. Specifying target pool size A newly created pool consumes a small fraction of the total cluster capacity and appears to the system that it will need a small number of PGs. However, in most cases, cluster administrators know which pools are expected to consume most of the system capacity over time. If you provide this information, known as the target size to Red Hat Ceph Storage, such pools can use a more appropriate number of PGs ( pg_num ) from the beginning. This approach prevents subsequent changes in pg_num and the overhead associated with moving data around when making those adjustments. You can specify target size of a pool in these ways: Section 3.6.1, "Specifying target size using the absolute size of the pool" Section 3.6.2, "Specifying target size using the total cluster capacity" 3.6.1. Specifying target size using the absolute size of the pool Procedure Set the target size using the absolute size of the pool in bytes: For example, to instruct the system that mypool is expected to consume 100T of space: You can also set the target size of a pool at creation time by adding the optional --target-size-bytes <bytes> argument to the ceph osd pool create command. 3.6.2. Specifying target size using the total cluster capacity Procedure Set the target size using the ratio of the total cluster capacity: Syntax For Example: tells the system that the pool mypool is expected to consume 1.0 relative to the other pools with target_size_ratio set. If mypool is the only pool in the cluster, this means an expected use of 100% of the total capacity. If there is a second pool with target_size_ratio as 1.0, both pools would expect to use 50% of the cluster capacity. You can also set the target size of a pool at creation time by adding the optional --target-size-ratio <ratio> argument to the ceph osd pool create command. 
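As a sketch of how this looks at creation time, you can pass the target size directly to the create command; the pool name mynewpool and the values are illustrative:
# Create a pool and declare the amount of data it is expected to eventually hold
ceph osd pool create mynewpool --target-size-bytes 100T
# Alternatively, declare the expected share of total cluster capacity instead
ceph osd pool create mynewpool --target-size-ratio 1.0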
Note If you specify impossible target size values, for example, a capacity larger than the total cluster, or ratios that sum to more than 1.0, the cluster raises a POOL_TARGET_SIZE_RATIO_OVERCOMMITTED or POOL_TARGET_SIZE_BYTES_OVERCOMMITTED health warning. If you specify both target_size_ratio and target_size_bytes for a pool, the cluster considers only the ratio, and raises a POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO health warning. 3.7. Placement group command line interface The ceph CLI allows you to set and get the number of placement groups for a pool, view the PG map and retrieve PG statistics. 3.7.1. Setting number of placement groups in a pool To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Creating a Pool for details. Once you set placement groups for a pool, you can increase the number of placement groups (but you cannot decrease the number of placement groups). To increase the number of placement groups, execute the following: Syntax Once you increase the number of placement groups, you must also increase the number of placement groups for placement ( pgp_num ) before your cluster will rebalance. The pgp_num should be equal to the pg_num . To increase the number of placement groups for placement, execute the following: Syntax 3.7.2. Getting number of placement groups in a pool To get the number of placement groups in a pool, execute the following: Syntax 3.7.3. Getting statistics for placement groups To get the statistics for the placement groups in your storag cluster, execute the following: Syntax Valid formats are plain (default) and json . 3.7.4. Getting statistics for stuck placement groups To get the statistics for all placement groups stuck in a specified state, execute the following: Syntax Inactive Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come up and in. Unclean Placement groups contain objects that are not replicated the desired number of times. They should be recovering. Stale Placement groups are in an unknown state - the OSDs that host them have not reported to the monitor cluster in a while (configured by mon_osd_report_timeout ). Valid formats are plain (default) and json . The threshold defines the minimum number of seconds the placement group is stuck before including it in the returned statistics (default 300 seconds). 3.7.5. Getting placement group maps To get the placement group map for a particular placement group, execute the following: Syntax Example Ceph returns the placement group map, the placement group, and the OSD status: 3.7.6. Scrubbing placement groups To scrub a placement group, execute the following: Syntax Ceph checks the primary and any replica nodes, generates a catalog of all objects in the placement group and compares them to ensure that no objects are missing or mismatched, and their contents are consistent. Assuming the replicas all match, a final semantic sweep ensures that all of the snapshot-related object metadata is consistent. Errors are reported via logs. 3.7.7. Marking unfound objects If the cluster has lost one or more objects, and you have decided to abandon the search for the lost data, you must mark the unfound objects as lost . If all possible locations have been queried and objects are still lost, you might have to give up on the lost objects. 
This is possible given unusual combinations of failures that allow the cluster to learn about writes that were performed before the writes themselves are recovered. Currently the only supported option is "revert", which will either roll back to a version of the object or (if it was a new object) forget about it entirely. To mark the "unfound" objects as "lost", execute the following: Syntax Important Use this feature with caution, because it might confuse applications that expect the object(s) to exist. | [
"osd pool default pg num = 100 osd pool default pgp num = 100",
"(OSDs * 100) Total PGs = ------------ pool size",
"(200 * 100) ----------- = 6667. Nearest power of 2: 8192 3",
"mon pg warn max per osd",
"ceph osd pool set POOL_NAME pg_autoscale_mode on",
"ceph osd pool set testpool pg_autoscale_mode on",
"ceph config set global osd_pool_default_pg_autoscale_mode MODE",
"ceph config set global osd_pool_default_pg_autoscale_mode on",
"ceph osd pool create POOL_NAME --bulk",
"ceph osd pool create testpool --bulk",
"ceph osd pool set ec_pool_overwrite bulk True Error EINVAL: expecting value 'true', 'false', '0', or '1'",
"ceph osd pool set POOL_NAME bulk true / false / 1 / 0",
"ceph osd pool set testpool bulk true",
"ceph osd pool get POOL_NAME bulk",
"ceph osd pool get testpool bulk bulk: true",
"ceph osd pool autoscale-status",
"POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK device_health_metrics 0 3.0 374.9G 0.0000 1.0 1 on False cephfs.cephfs.meta 24632 3.0 374.9G 0.0000 4.0 32 on False cephfs.cephfs.data 0 3.0 374.9G 0.0000 1.0 32 on False .rgw.root 1323 3.0 374.9G 0.0000 1.0 32 on False default.rgw.log 3702 3.0 374.9G 0.0000 1.0 32 on False default.rgw.control 0 3.0 374.9G 0.0000 1.0 32 on False default.rgw.meta 382 3.0 374.9G 0.0000 4.0 8 on False",
"ceph config set global mon_target_pg_per_osd number",
"ceph config set global mon_target_pg_per_osd 150",
"ceph osd pool get noautoscale",
"ceph osd pool set noautoscale",
"ceph osd pool unset noautoscale",
"ceph osd pool set pool-name target_size_bytes value",
"ceph osd pool set mypool target_size_bytes 100T",
"ceph osd pool set pool-name target_size_ratio ratio",
"ceph osd pool set mypool target_size_ratio 1.0",
"ceph osd pool set POOL_NAME pg_num PG_NUM",
"ceph osd pool set POOL_NAME pgp_num PGP_NUM",
"ceph osd pool get POOL_NAME pg_num",
"ceph pg dump [--format FORMAT ]",
"ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} INTERVAL",
"ceph pg map PG_ID",
"ceph pg map 1.6c",
"osdmap e13 pg 1.6c (1.6c) -> up [1,0] acting [1,0]",
"ceph pg scrub PG_ID",
"ceph pg PG_ID mark_unfound_lost revert|delete"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/storage_strategies_guide/placement-groups_strategy |
Chapter 2. CatalogSource [operators.coreos.com/v1alpha1] | Chapter 2. CatalogSource [operators.coreos.com/v1alpha1] Description CatalogSource is a repository of CSVs, CRDs, and operator packages. Type object Required metadata spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Required sourceType Property Type Description address string Address is a host that OLM can use to connect to a pre-existing registry. Format: <registry-host or ip>:<port> Only used when SourceType = SourceTypeGrpc. Ignored when the Image field is set. configMap string ConfigMap is the name of the ConfigMap to be used to back a configmap-server registry. Only used when SourceType = SourceTypeConfigmap or SourceTypeInternal. description string displayName string Metadata grpcPodConfig object GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. icon object image string Image is an operator-registry container image to instantiate a registry-server with. Only used when SourceType = SourceTypeGrpc. If present, the address field is ignored. priority integer Priority field assigns a weight to the catalog source to prioritize them so that it can be consumed by the dependency resolver. Usage: Higher weight indicates that this catalog source is preferred over lower weighted catalog sources during dependency resolution. The range of the priority value can go from positive to negative in the range of int32. The default value to a catalog source with unassigned priority would be 0. The catalog source with the same priority values will be ranked lexicographically based on its name. publisher string secrets array (string) Secrets represent set of secrets that can be used to access the contents of the catalog. It is best to keep this list small, since each will need to be tried for every catalog entry. sourceType string SourceType is the type of source updateStrategy object UpdateStrategy defines how updated catalog source images can be discovered Consists of an interval that defines polling duration and an embedded strategy type 2.1.2. .spec.grpcPodConfig Description GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. Type object Property Type Description nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. priorityClassName string If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. 
securityContextConfig string SecurityContextConfig can be one of legacy or restricted . The CatalogSource's pod is either injected with the right pod.spec.securityContext and pod.spec.container[*].securityContext values to allow the pod to run in Pod Security Admission (PSA) restricted mode, or doesn't set these values at all, in which case the pod can only be run in PSA baseline or privileged namespaces. Currently, if the SecurityContextConfig is unspecified, the default value of legacy is used. Specifying a value other than legacy or restricted results in a validation error. When using older catalog images, which could not be run in restricted mode, the SecurityContextConfig should be set to legacy . In a future version, the default will be set to restricted , and catalog maintainers should rebuild their catalogs with a version of opm that supports running catalogSource pods in restricted mode to prepare for these changes. More information about PSA can be found here: https://kubernetes.io/docs/concepts/security/pod-security-admission/ tolerations array Tolerations are the catalog source's pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 2.1.3. .spec.grpcPodConfig.tolerations Description Tolerations are the catalog source's pod's tolerations. Type array 2.1.4. .spec.grpcPodConfig.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.5. .spec.icon Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 2.1.6. .spec.updateStrategy Description UpdateStrategy defines how updated catalog source images can be discovered. Consists of an interval that defines polling duration and an embedded strategy type Type object Property Type Description registryPoll object 2.1.7. .spec.updateStrategy.registryPoll Description Type object Property Type Description interval string Interval is used to determine the time interval between checks of the latest catalog source version. The catalog operator polls to see if a new version of the catalog source is available. If available, the latest image is pulled and gRPC traffic is directed to the latest catalog source. 2.1.8.
.status Description Type object Property Type Description conditions array Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } configMapReference object connectionState object latestImageRegistryPoll string The last time the CatalogSource image registry has been polled to ensure the image is up-to-date message string A human readable message indicating details about why the CatalogSource is in this condition. reason string Reason is the reason the CatalogSource was transitioned to its current state. registryService object 2.1.9. .status.conditions Description Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. Type array 2.1.10. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.1.11. .status.configMapReference Description Type object Required name namespace Property Type Description lastUpdateTime string name string namespace string resourceVersion string uid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 2.1.12. .status.connectionState Description Type object Required lastObservedState Property Type Description address string lastConnect string lastObservedState string 2.1.13. .status.registryService Description Type object Property Type Description createdAt string port string protocol string serviceName string serviceNamespace string 2.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/catalogsources GET : list objects of kind CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources DELETE : delete collection of CatalogSource GET : list objects of kind CatalogSource POST : create a CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} DELETE : delete a CatalogSource GET : read the specified CatalogSource PATCH : partially update the specified CatalogSource PUT : replace the specified CatalogSource /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status GET : read status of the specified CatalogSource PATCH : partially update status of the specified CatalogSource PUT : replace status of the specified CatalogSource 2.2.1. /apis/operators.coreos.com/v1alpha1/catalogsources Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind CatalogSource Table 2.2. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty 2.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CatalogSource Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CatalogSource Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Reponse body 200 - OK CatalogSourceList schema 401 - Unauthorized Empty HTTP method POST Description create a CatalogSource Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body CatalogSource schema Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 202 - Accepted CatalogSource schema 401 - Unauthorized Empty 2.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name} Table 2.12. 
Global path parameters Parameter Type Description name string name of the CatalogSource namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CatalogSource Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CatalogSource Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.18. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CatalogSource Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CatalogSource Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body CatalogSource schema Table 2.24. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty 2.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/catalogsources/{name}/status Table 2.25. Global path parameters Parameter Type Description name string name of the CatalogSource namespace string object name and auth scope, such as for teams and projects Table 2.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CatalogSource Table 2.27. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.28. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CatalogSource Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body Patch schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CatalogSource Table 2.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.33. Body parameters Parameter Type Description body CatalogSource schema Table 2.34. HTTP responses HTTP code Reponse body 200 - OK CatalogSource schema 201 - Created CatalogSource schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/catalogsource-operators-coreos-com-v1alpha1 |
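For reference, the following CatalogSource manifest is a minimal sketch that ties together the spec fields documented in this chapter. The name, namespace, index image, and polling interval shown here are illustrative placeholders rather than values taken from this document; adjust them for your own registry and catalog image.
Example CatalogSource manifest (illustrative)
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog                        # placeholder name
  namespace: openshift-marketplace             # conventional namespace for cluster-wide catalogs (assumed, not stated in this chapter)
spec:
  sourceType: grpc                             # serve the catalog from a registry-server pod
  image: quay.io/example/catalog-index:latest  # placeholder operator-registry index image
  displayName: Example Catalog
  publisher: Example Publisher
  priority: 0                                  # default priority; higher values are preferred during dependency resolution
  grpcPodConfig:
    securityContextConfig: restricted          # requires a catalog built with an opm version that supports restricted mode
  updateStrategy:
    registryPoll:
      interval: 30m                            # poll for a newer index image every 30 minutes
If the manifest is saved as example-catalog.yaml, you could apply it with USD oc apply -f example-catalog.yaml and then watch .status.connectionState.lastObservedState to confirm that the registry pod becomes reachable.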
Chapter 2. Image Registry Operator in OpenShift Container Platform | Chapter 2. Image Registry Operator in OpenShift Container Platform 2.1. Image Registry on cloud platforms and OpenStack The Image Registry Operator installs a single instance of the OpenShift image registry, and manages all registry configuration, including setting up registry storage. Note Storage is only automatically configured when you install an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM(R), or OpenStack. When you install or upgrade an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM(R), or OpenStack, the Image Registry Operator sets the spec.storage.managementState parameter to Managed . If the spec.storage.managementState parameter is set to Unmanaged , the Image Registry Operator takes no action related to storage. After the control plane deploys in the management cluster, the Operator creates a default configs.imageregistry.operator.openshift.io resource instance based on configuration detected in the cluster. If insufficient information is available to define a complete configs.imageregistry.operator.openshift.io resource, the incomplete resource is defined and the Operator updates the resource status with information about what is missing. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the ClusterOperator object for the Image Registry Operator. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning custom resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 2.2. Image Registry on bare metal, Nutanix, and vSphere 2.2.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.3. Image Registry Operator distribution across availability zones The default configuration of the Image Registry Operator spreads image registry pods across topology zones to prevent delayed recovery times in case of a complete zone failure where all pods are impacted. 
The Image Registry Operator defaults to the following when deployed with a zone-related topology constraint: Image Registry Operator deployed with a zone related topology constraint topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule The Image Registry Operator defaults to the following when deployed without a zone-related topology constraint, which applies to bare metal and vSphere instances: Image Registry Operator deployed without a zone related topology constraint topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule A cluster administrator can override the default topologySpreadConstraints by configuring the configs.imageregistry.operator.openshift.io/cluster spec file. In that case, only the constraints you provide apply. 2.4. Additional resources Configuring pod topology spread constraints 2.5. Image Registry Operator configuration parameters The configs.imageregistry.operator.openshift.io resource offers the following configuration parameters. Parameter Description managementState Managed : The Operator updates the registry as configuration resources are updated. Unmanaged : The Operator ignores changes to the configuration resources. Removed : The Operator removes the registry instance and tear down any storage that the Operator provisioned. logLevel Sets logLevel of the registry instance. Defaults to Normal . The following values for logLevel are supported: Normal Debug Trace TraceAll httpSecret Value needed by the registry to secure uploads, generated by default. operatorLogLevel The operatorLogLevel configuration parameter provides intent-based logging for the Operator itself and a simple way to manage coarse-grained logging choices that Operators must interpret for themselves. This configuration parameter defaults to Normal . It does not provide fine-grained control. The following values for operatorLogLevel are supported: Normal Debug Trace TraceAll proxy Defines the Proxy to be used when calling master API and upstream registries. affinity You can use the affinity parameter to configure pod scheduling preferences and constraints for Image Registry Operator pods. Affinity settings can use the podAffinity or podAntiAffinity spec. Both options can use either preferredDuringSchedulingIgnoredDuringExecution rules or requiredDuringSchedulingIgnoredDuringExecution rules. storage Storagetype : Details for configuring registry storage, for example S3 bucket coordinates. Normally configured by default. readOnly Indicates whether the registry instance should reject attempts to push new images or delete existing ones. requests API Request Limit details. Controls how many parallel requests a given registry instance will handle before queuing additional requests. defaultRoute Determines whether or not an external route is defined using the default hostname. If enabled, the route uses re-encrypt encryption. Defaults to false . 
routes Array of additional routes to create. You provide the hostname and certificate for the route. rolloutStrategy Defines rollout strategy for the image registry deployment. Defaults to RollingUpdate . replicas Replica count for the registry. disableRedirect Controls whether to route all data through the registry, rather than redirecting to the back end. Defaults to false . spec.storage.managementState The Image Registry Operator sets the spec.storage.managementState parameter to Managed on new installations or upgrades of clusters using installer-provisioned infrastructure on AWS or Azure. Managed : Determines that the Image Registry Operator manages underlying storage. If the Image Registry Operator's managementState is set to Removed , then the storage is deleted. If the managementState is set to Managed , the Image Registry Operator attempts to apply some default configuration on the underlying storage unit. For example, if set to Managed , the Operator tries to enable encryption on the S3 bucket before making it available to the registry. If you do not want the default settings to be applied on the storage you are providing, make sure the managementState is set to Unmanaged . Unmanaged : Determines that the Image Registry Operator ignores the storage settings. If the Image Registry Operator's managementState is set to Removed , then the storage is not deleted. If you provided an underlying storage unit configuration, such as a bucket or container name, and the spec.storage.managementState is not yet set to any value, then the Image Registry Operator configures it to Unmanaged . 2.6. Enable the Image Registry default route with the Custom Resource Definition In OpenShift Container Platform, the Registry Operator controls the OpenShift image registry feature. The Operator is defined by the configs.imageregistry.operator.openshift.io Custom Resource Definition (CRD). If you need to automatically enable the Image Registry default route, patch the Image Registry Operator CRD. Procedure Patch the Image Registry Operator CRD: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}' 2.7. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. 
To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 2.8. Configuring storage credentials for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, storage credential configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry 2.9. Additional resources Configuring the registry for AWS user-provisioned infrastructure Configuring the registry for GCP user-provisioned infrastructure Configuring the registry for Azure user-provisioned infrastructure Configuring the registry for bare metal Configuring the registry for vSphere Configuring the registry for RHOSP Configuring the registry for Red Hat OpenShift Data Foundation Configuring the registry for Nutanix | [
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/registry/configuring-registry-operator |
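As a brief illustration of the configuration parameters described in this chapter, the following sketch shows how you might move the registry out of the Removed state on a bare-metal or vSphere cluster and adjust a few optional spec fields. The replica count and defaultRoute values are example choices, not requirements, and the commands follow the same oc patch pattern shown earlier for enabling the default route.
USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"managementState":"Managed"}}'
USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"replicas":2,"defaultRoute":true,"rolloutStrategy":"RollingUpdate"}}'
Remember that switching managementState to Managed on platforms without shareable object storage also requires configuring registry storage, as noted in the section on bare metal, Nutanix, and vSphere.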
Chapter 1. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer | Chapter 1. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer You can use the Assisted Installer to install a cluster on Oracle(R) Cloud Infrastructure (OCI). This method is recommended for most users, and requires an internet connection. If you want to set up the cluster manually or using other automation tools, or if you are working in a disconnected environment, you can use the Red Hat Agent-based Installer for the installation. For details, see Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Agent-based Installer . Note You can deploy OpenShift Container Platform on a Dedicated Region (Oracle documentation) in the same way as any other Oracle Cloud Infrastructure (OCI) region. 1.1. About the Assisted Installer and OCI integration You can run cluster workloads on Oracle(R) Cloud Infrastructure (OCI) infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. Both Red Hat and Oracle test, validate, and support running OpenShift Container Platform clusters on OCI. This section explains how to use the Assisted Installer to install an OpenShift Container Platform cluster on the OCI platform. The installation deploys cloud-native components such as Oracle Cloud Controller Manager (CCM) and Oracle Container Storage Interface (CSI), and integrates your cluster with OCI API resources such as instance nodes, load balancers, and storage. The installation process uses the OpenShift Container Platform discovery ISO image provided by Red Hat, together with the scripts and manifests provided and maintained by OCI. 1.1.1. Preinstallation considerations Before installing OpenShift Container Platform on Oracle Cloud Infrastructure (OCI), you must consider the following configuration choices. Deployment platforms The integration between OpenShift Container Platform and Oracle Cloud Infrastructure (OCI) is certified on both virtual machines (VMs) and bare-metal (BM) machines. Bare-metal installations using iSCSI boot drives require a secondary vNIC that is automatically created in the Terraform stack provided by Oracle. Before you create a virtual machine (VM) or bare-metal (BM) machine, you must identify the relevant OCI shape. For details, see the following resource: Cloud instance types (Red Hat Ecosystem Catalog portal) . VPU sizing recommendations To ensure the best performance conditions for your cluster workloads that operate on OCI, ensure that volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides guidance for selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Basic environment: 500 GB, and 60 VPUs. Heavy production environment: More than 500 GB, and 100 or more VPUs. Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units (Oracle documentation) . Instance sizing recommendations Find recommended values for compute instance CPU, memory, VPU, and volume size for OpenShift Container Platform nodes. For details, see Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle documentation) . 1.1.2.
Workflow The procedure for using the Assisted Installer in a connected environment to install a cluster on OCI is outlined below: In the OCI console, configure an OCI account to host the cluster: Create a new child compartment under an existing compartment. Create a new object storage bucket or use one provided by OCI. Download the stack file template stored locally. In the Assisted Installer console, set up a cluster: Enter the cluster configurations. Generate and download the discovery ISO image. In the OCI console, create the infrastructure: Upload the discovery ISO image to the OCI bucket. Create a Pre-Authenticated Request (PAR) for the ISO image. Upload the stack file template, and use it to create and apply the stack. Copy the custom manifest YAML file from the stack. In the Assisted Installer console, complete the cluster installation: Set roles for the cluster nodes. Upload the manifests provided by Oracle. Install the cluster. Important The steps for provisioning OCI resources are provided as an example only. You can also choose to create the required resources through other methods; the scripts are just an example. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and the installation process on OpenShift Container Platform. You can access OCI configurations to complete these steps, or use the configurations to model your own custom script. Additional resources Assisted Installer for OpenShift Container Platform Installing a Cluster with Red Hat's Assisted Installer (Oracle documentation) Internet access for OpenShift Container Platform 1.2. Preparing the OCI environment Before installing OpenShift Container Platform using Assisted Installer, create the necessary resources and download the configuration file in the OCI environment. Prerequisites You have an OCI account to host the cluster. If you use a firewall and you plan to use a Telemetry service, you configured your firewall to allow OpenShift Container Platform to access the sites required. Procedure Log in to your Oracle Cloud Infrastructure (OCI) account with administrator privileges. Configure the account by defining the Cloud Accounts and Resources (Oracle documentation) . Ensure that you create the following resources: Create a child compartment for organizing, restricting access, and setting usage limits to OCI resources. For the full procedure, see Creating a Compartment (Oracle documentation) . Create a new object storage bucket into which you will upload the discovery ISO image. For the full procedure, see Creating an Object Storage Bucket (Oracle documentation) . Download the latest version of the create-cluster-vX.X.X.zip configuration file from the oracle-quickstart/oci-openshift repository. This file provides the infrastructure for the cluster and contains configurations for the following: Terraform Stacks : The Terraform stack code for provisioning OCI resources to create and manage OpenShift Container Platform clusters on OCI. Custom Manifests : The manifest files needed for the installation of OpenShift Container Platform clusters on OCI. Note To make any changes to the manifests, you can clone the entire Oracle GitHub repository and access the custom_manifests and terraform-stacks directories directly. For details, see Configuration Files (Oracle documentation) . 1.3. Using the Assisted Installer to generate an OCI-compatible discovery ISO image Create the cluster configuration and generate the discovery ISO image in the Assisted Installer web console. 
Prerequisites You created a child compartment and an object storage bucket on OCI. For details, see Preparing the OCI environment . You reviewed details about the OpenShift Container Platform installation and update processes. 1.3.1. Creating the cluster Set the cluster details. Procedure Log in to the Assisted Installer web console with your credentials. In the Red Hat OpenShift tile, select OpenShift . In the Red Hat OpenShift Container Platform tile, select Create Cluster . On the Cluster Type page, scroll down to the end of the Cloud tab, and select Oracle Cloud Infrastructure (virtual machines) . On the Create an OpenShift Cluster page, select the Interactive tile. On the Cluster Details page, complete the following fields: Field Action required Cluster name Specify the name of your cluster, such as oci . This is the same value as the cluster name in OCI. Base domain Specify the base domain of the cluster, such as openshift-demo.devcluster.openshift.com . This must be the same value as the zone DNS server in OCI. OpenShift version * For installations on virtual machines only, specify OpenShift 4.14 or a later version. * For installations that include bare metal machines, specify OpenShift 4.16 or a later version. CPU architecture Specify x86_64 or Arm64 . Integrate with external partner platforms Specify Oracle Cloud Infrastructure . After you specify this value, the Include custom manifests checkbox is selected by default and the Custom manifests page is added to the wizard. Leave the default settings for the remaining fields, and click Next . On the Operators page, click Next . 1.3.2. Generating the Discovery ISO image Generate and download the Discovery ISO image. Procedure On the Host Discovery page, click Add hosts and complete the following steps: For the Provisioning type field, select Minimal image file . For the SSH public key field, add the SSH public key from your local system, by copying the output of the following command: USD cat ~/.ssh/id_rsa.pub The SSH public key will be installed on all OpenShift Container Platform control plane and compute nodes. Click Generate Discovery ISO to generate the discovery ISO image file. Click Download Discovery ISO to save the file to your local system. Additional resources Installation and update Configuring your firewall 1.4. Provisioning OCI infrastructure for your cluster When using the Assisted Installer to create details for your OpenShift Container Platform cluster, you specify these details in a Terraform stack. A stack is an OCI feature that automates the provisioning of all necessary OCI infrastructure resources that are required for installing an OpenShift Container Platform cluster on OCI. Prerequisites You downloaded the discovery ISO image to a local directory. For details, see "Using the Assisted Installer to generate an OCI-compatible discovery ISO image". You downloaded the Terraform stack template to a local directory. For details, see "Preparing the OCI environment". Procedure Log in to your Oracle Cloud Infrastructure (OCI) account. Upload the discovery ISO image from your local drive to the new object storage bucket you created. For the full procedure, see Uploading an Object Storage Object to a Bucket (Oracle documentation) . Locate the uploaded discovery ISO, and complete the following steps: Create a Pre-Authenticated Request (PAR) for the ISO from the adjacent options menu. Copy the generated URL to use as the OpenShift Image Source URI in the next step.
For the full procedure, see Creating a Pre-Authenticated Request in Object Storage (Oracle documentation) . Create and apply the Terraform stack: Important The Terraform stack includes files for creating cluster resources and custom manifests. The stack also includes a script, and when you apply the stack, the script creates OCI resources, such as DNS records, an instance, and so on. For a list of the resources, see the Terraform Defined Resources for OpenShift on OCI README file . Upload the Terraform stack template create-cluster-vX.X.X.zip to the new object storage bucket. Complete the stack information and click Next . Important Make sure that Cluster Name matches Cluster Name in Assisted Installer, and Zone DNS matches Base Domain in Assisted Installer. In the OpenShift Image Source URI field, paste the Pre-Authenticated Request URL link that you generated in the previous step. Ensure that the correct Compute Shape field value is defined, depending on whether you are installing on bare metal or a virtual machine. If not, select a different shape from the list. For details, see Compute Shapes (Oracle documentation) . Click Apply to apply the stack. For the full procedure, see Creating OpenShift Container Platform Infrastructure Using Resource Manager (Oracle documentation) . Copy the dynamic_custom_manifest.yml file from the Outputs page of the Terraform stack. Note The YAML file contains all the required manifests, concatenated and preformatted with the configuration values. For details, see the Custom Manifests README file . For the full procedure, see Getting the OpenShift Container Platform Custom Manifests for Installation (Oracle documentation) . 1.5. Completing the remaining Assisted Installer steps After you provision Oracle(R) Cloud Infrastructure (OCI) resources and upload OpenShift Container Platform custom manifest configuration files to OCI, you must complete the remaining cluster installation steps on the Assisted Installer before you can create an instance on OCI. These steps include assigning node roles and adding custom manifests. 1.5.1. Assigning node roles Following host discovery, the role of all nodes appears as Auto-assign by default. Change each of the node roles to either Control Plane node or Worker . Prerequisites You created and applied the Terraform stack in OCI. For details, see "Provisioning OCI infrastructure for your cluster". Procedure From the Assisted Installer user interface, go to the Host discovery page. Under the Role column, select either Control plane node or Worker for each targeted hostname. Then click Next . Note Before continuing to the next step, wait for each node to reach Ready status. Expand the node to verify that the hardware type is bare metal. Accept the default settings for the Storage and Networking pages. Then click Next . 1.5.2. Adding custom manifests Add the mandatory custom manifests provided by Oracle. For details, see Custom Manifests (Oracle documentation). Prerequisites You copied the dynamic_custom_manifest.yml file from the Terraform stack in OCI. For details, see "Provisioning OCI infrastructure for your cluster". Procedure On the Custom manifests page, in the Folder field, select manifests . This is the Assisted Installer folder where you want to save the custom manifest file. In the File name field, enter a filename, for example, dynamic_custom_manifest.yml . Paste the contents of the dynamic_custom_manifest.yml file that you copied from OCI: In the Content section, click the Paste content icon.
If you are using Firefox, click OK to close the dialog box, and then press Ctrl+V . Otherwise, skip this step. Click Next to save the custom manifest. From the Review and create page, click Install cluster to create your OpenShift Container Platform cluster on OCI. After the cluster installation and initialization operations complete, the Assisted Installer indicates that the cluster installation is finished. For more information, see the "Completing the installation" section in the Assisted Installer for OpenShift Container Platform document. Additional resources Assisted Installer for OpenShift Container Platform 1.6. Verifying a successful cluster installation on OCI Verify that your cluster was installed and is running effectively on Oracle(R) Cloud Infrastructure (OCI). Procedure From the Red Hat Hybrid Cloud Console , go to Clusters > Assisted Clusters and select your cluster's name. On the Installation Progress page, check that the Installation progress bar is at 100% and that a message indicating Installation completed successfully is displayed. Under Host inventory , confirm that the status of all control plane and compute nodes is Installed . Note OpenShift Container Platform designates one of the control plane nodes as the bootstrap virtual machine, eliminating the need for a separate bootstrap machine. Click the Web Console URL to access the OpenShift Container Platform web console. From the menu, select Compute > Nodes . Locate your node from the Nodes table. From the Terminal tab, verify that iSCSI appears next to the serial number. From the Overview tab, check that your node has a Ready status. Select the YAML tab. Check the labels parameter, and verify that the listed labels apply to your configuration. For example, the topology.kubernetes.io/region=us-sanjose-1 label indicates the OCI region in which the node was deployed. 1.7. Troubleshooting the installation of a cluster on OCI If you experience issues with using the Assisted Installer to install an OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI), read the following sections to troubleshoot common problems. The Ingress Load Balancer in OCI is not at a healthy status This issue is classed as a Warning because, by using OCI to create a stack, you created a pool of compute nodes, 3 by default, that are automatically added as backend listeners for the Ingress Load Balancer. By default, OpenShift Container Platform deploys 2 router pods, which are based on the default values from the OpenShift Container Platform manifest files. The Warning is expected because a mismatch exists between the number of router pods available (two) and the number of compute nodes acting as backend listeners (three). Figure 1.1. Example of a Warning message that is under the Backend set information tab on OCI You do not need to modify the Ingress Load Balancer configuration. Instead, you can point the Ingress Load Balancer to specific compute nodes that operate in your cluster on OpenShift Container Platform. To do this, use placement mechanisms, such as annotations, on OpenShift Container Platform to ensure router pods only run on the compute nodes that you originally configured on the Ingress Load Balancer as backend listeners. OCI create stack operation fails with an Error: 400-InvalidParameter message On attempting to create a stack on OCI, you identified that the Logs section of the job outputs an error message.
For example: Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn Go to the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, and check the Cluster name field on the Cluster Details step. Remove any special characters, such as a hyphen ( - ), from the name, because these special characters are not compatible with the OCI naming conventions. For example, change oci-demo to ocidemo . Additional resources Troubleshooting OpenShift Container Platform on OCI (Oracle documentation) Installing an on-premise cluster using the Assisted Installer | [
"cat ~/.ssh/id_rsa.put",
"Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_oci/installing-oci-assisted-installer |
Part V. Red Hat Build of OptaPlanner quick start guides | Part V. Red Hat Build of OptaPlanner quick start guides Red Hat Build of OptaPlanner provides the following quick start guides to demonstrate how OptaPlanner can integrate with different technologies: Red Hat Build of OptaPlanner on the Red Hat build of Quarkus platform: a school timetable quick start guide Red Hat Build of OptaPlanner on the Red Hat build of Quarkus platform: a vaccination appointment scheduler quick start guide Red Hat Build of OptaPlanner on the Red Hat build of Quarkus platform: an employee scheduler quick start guide Red Hat Build of OptaPlanner on Spring Boot: a school timetable quick start guide Red Hat Build of OptaPlanner with Java solvers: a school timetable quick start guide
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_openjdk/making-open-source-more-inclusive |
4.141. libpng | 4.141. libpng 4.141.1. RHSA-2012:0317 - Important: libpng security update Updated libpng and libpng10 packages that fix one security issue are now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The libpng packages contain a library of functions for creating and manipulating PNG (Portable Network Graphics) image format files. Security Fix CVE-2011-3026 A heap-based buffer overflow flaw was found in libpng. An attacker could create a specially-crafted PNG image that, when opened, could cause an application using libpng to crash or, possibly, execute arbitrary code with the privileges of the user running the application. Users of libpng and libpng10 should upgrade to these updated packages, which contain a backported patch to correct this issue. All running applications using libpng or libpng10 must be restarted for the update to take effect. 4.141.2. RHSA-2012:0407 - Moderate: libpng security update Updated libpng packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libpng packages contain a library of functions for creating and manipulating PNG (Portable Network Graphics) image format files. Security Fix CVE-2011-3045 A heap-based buffer overflow flaw was found in the way libpng processed compressed chunks in PNG image files. An attacker could create a specially-crafted PNG image file that, when opened, could cause an application using libpng to crash or, possibly, execute arbitrary code with the privileges of the user running the application. Users of libpng should upgrade to these updated packages, which correct this issue. For Red Hat Enterprise Linux 5, they contain a backported patch. For Red Hat Enterprise Linux 6, they upgrade libpng to version 1.2.48. All running applications using libpng must be restarted for the update to take effect. 4.141.3. RHSA-2012:0523 - Moderate: libpng security update Updated libpng packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libpng packages contain a library of functions for creating and manipulating PNG (Portable Network Graphics) image format files. Security Fix CVE-2011-3048 A heap-based buffer overflow flaw was found in the way libpng processed tEXt chunks in PNG image files. An attacker could create a specially-crafted PNG image file that, when opened, could cause an application using libpng to crash or, possibly, execute arbitrary code with the privileges of the user running the application. Users of libpng should upgrade to these updated packages, which correct this issue. For Red Hat Enterprise Linux 5, they contain a backported patch. For Red Hat Enterprise Linux 6, they upgrade libpng to version 1.2.49. 
All running applications using libpng must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libpng |
Chapter 1. Service binding | Chapter 1. Service binding Note Deprecation of OpenShift Service Binding Operator The OpenShift Service Binding Operator is deprecated in OpenShift Container Platform (OCP) 4.13 and later and is planned to be removed in a future OCP release. The following chapter provides information about service binding and workload projection that were added to Red Hat build of Quarkus in version 2.7.5 and are in the state of Technology Preview in version 3.15. Generally, OpenShift applications and services also referred to as deployable workloads, need to be connected to other services for retrieving additional information, such as service URLs or credentials. The Service Binding Operator facilitates retrieval of the necessary information, which is then made available to applications and service-binding tools like the quarkus-kubernetes-service-binding extension through environment variables without directly influencing or determining the use of the extension tool itself. Quarkus supports the Service binding specification for Kubernetes to bind services to applications. Specifically, Quarkus implements the workload projection part of the specification, enabling applications to bind to services like databases or brokers, requiring only minimal configuration. To enable service binding for the available extensions, include the quarkus-kubernetes-service-binding extension to the application dependencies. You can use the following extensions for service binding and for workload projection: quarkus-jdbc-mariadb quarkus-jdbc-mssql quarkus-jdbc-mysql quarkus-jdbc-postgresql quarkus-mongo-client - Technology Preview quarkus-kafka-client quarkus-messaging-kafka quarkus-reactive-mssql-client - Technology Preview quarkus-reactive-mysql-client quarkus-reactive-pg-client 1.1. Workload projection Workload projection is a process of obtaining the configuration for services from the Kubernetes cluster. This configuration takes the form of directory structures that follow certain conventions and are attached to an application or a service as a mounted volume. The kubernetes-service-binding extension uses this directory structure to create configuration sources, which allows you to configure additional modules, such as databases or message brokers. You can use workload projection during application development to connect your application to a development database or other locally run services without changing the application code or configuration. For an example of a workload projection where the directory structure is included in the test resources and passed to an integration test, see the Kubernetes Service Binding datasource GitHub repository. Note The k8s-sb directory is the root of all service bindings. In this example, only one database called fruit-db is intended to be bound. This binding database has the type file, which specifies postgresql as the database type, while the other files in the directory provide the necessary information to establish the connection. When your Red Hat build of Quarkus project obtains information from SERVICE_BINDING_ROOT environment variables that are set by OpenShift Container Platform, you can locate generated configuration files that are present in the file system and use them to map the configuration-file values to properties of certain extensions. 1.2. 
Introduction to Service Binding Operator The Service Binding Operator is an Operator that implements the Service Binding Specification for Kubernetes and is meant to simplify the binding of services to an application. Containerized applications that support workload projection obtain service binding information in the form of volume mounts. The Service Binding Operator reads binding service information and mounts it to the application containers that need it. The correlation between application and bound services is expressed through the ServiceBinding resources, which declares the intent of what services are meant to be bound to what application. The Service Binding Operator watches for ServiceBinding resources, which inform the Operator what applications are meant to be bound with what services. When a listed application is deployed, the Service Binding Operator collects all the binding information that must be passed to the application and then upgrades the application container by attaching a volume mount with the binding information. The Service Binding Operator completes the following actions: Observes ServiceBinding resources for workloads bound to a particular service. Applies the binding information to the workload using volume mounts. The following chapter describes the automatic and semi-automatic service binding approaches and their use cases. The kubernetes-service-binding extension generates a ServiceBinding resource with either approach. With the semi-automatic approach, users must manually provide a configuration for target services. With the automatic approach, no additional configuration is needed for a limited set of services generating the ServiceBinding resource. Additional resources Workload projection 1.3. Semi-automatic service binding A service binding process starts with a user specification of required services that will be bound to a certain application. This expression is summarized in the ServiceBinding resource generated by the kubernetes-service-binding extension. The use of the kubernetes-service-binding extensions helps users to generate ServiceBinding resources with minimal configuration, therefore simplifying the process overall. The Service Binding Operator responsible for the binding process then reads the information from the ServiceBinding resource and mounts the required files to a container accordingly. An example of the ServiceBinding resource: apiVersion: binding.operators.coreos.com/v1beta1 kind: ServiceBinding metadata: name: binding-request namespace: service-binding-demo spec: application: name: java-app group: apps version: v1 resource: deployments services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: Database name: db-demo id: postgresDB Note The quarkus-kubernetes-service-binding extension provides a more compact way of expressing the same information. For example: quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1 quarkus.kubernetes-service-binding.services.db-demo.kind=Database After adding the earlier configuration properties inside your application.properties , the quarkus-kubernetes , in combination with the quarkus-kubernetes-service-binding extension, automatically generates the ServiceBinding resource. The earlier mentioned db-demo property-configuration identifier now has a double role and also completes the following actions: Correlates and groups api-version and kind properties together. 
Defines the name property for the custom resource, which you can edit later if needed. For example: quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1 quarkus.kubernetes-service-binding.services.db-demo.kind=Database quarkus.kubernetes-service-binding.services.db-demo.name=my-db Additional resources How to use Quarkus with the Service Binding Operator List of bindable Operators 1.4. Generating a ServiceBinding custom resource by using the semi-automatic method You can generate a ServiceBinding resource semi-automatically. The following procedure shows the OpenShift Container Platform deployment process, including the installation of operators for configuring and deploying an application. In this procedure, you install the Service Binding Operator and the PostgreSQL Operator from Crunchy Data . Important PostgreSQL Operator is a third-party component. For PostgreSQL Operator support policies and | [
"apiVersion: binding.operators.coreos.com/v1beta1 kind: ServiceBinding metadata: name: binding-request namespace: service-binding-demo spec: application: name: java-app group: apps version: v1 resource: deployments services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: Database name: db-demo id: postgresDB",
"quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1 quarkus.kubernetes-service-binding.services.db-demo.kind=Database",
"quarkus.kubernetes-service-binding.services.db-demo.api-version=postgres-operator.crunchydata.com/v1beta1 quarkus.kubernetes-service-binding.services.db-demo.kind=Database quarkus.kubernetes-service-binding.services.db-demo.name=my-db",
"get csv -w",
"get csv -w",
"new-project demo",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: openshift: true image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.2-1 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi",
"apply -f ~/pg-cluster.yml",
"get pods -n demo",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:create -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=3.15.3.SP1-redhat-00002 -DprojectGroupId=org.acme -DprojectArtifactId=todo-example -DclassName=\"org.acme.TodoResource\" -Dpath=\"/todo\"",
"./mvnw quarkus:add-extension -Dextensions=\"rest-jackson,jdbc-postgresql,hibernate-orm-panache,openshift,kubernetes-service-binding\"",
"package org.acme; import jakarta.persistence.Column; import jakarta.persistence.Entity; import io.quarkus.hibernate.orm.panache.PanacheEntity; @Entity public class Todo extends PanacheEntity { @Column(length = 40, unique = true) public String title; public boolean completed; public Todo() { } public Todo(String title, Boolean completed) { this.title = title; } }",
"package org.acme; import jakarta.transaction.Transactional; import jakarta.ws.rs.*; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.core.Response.Status; import java.util.List; @Path(\"/todo\") public class TodoResource { @GET @Path(\"/\") public List<Todo> getAll() { return Todo.listAll(); } @GET @Path(\"/{id}\") public Todo get(@PathParam(\"id\") Long id) { Todo entity = Todo.findById(id); if (entity == null) { throw new WebApplicationException(\"Todo with id of \" + id + \" does not exist.\", Status.NOT_FOUND); } return entity; } @POST @Path(\"/\") @Transactional public Response create(Todo item) { item.persist(); return Response.status(Status.CREATED).entity(item).build(); } @GET @Path(\"/{id}/complete\") @Transactional public Response complete(@PathParam(\"id\") Long id) { Todo entity = Todo.findById(id); entity.id = id; entity.completed = true; return Response.ok(entity).build(); } @DELETE @Transactional @Path(\"/{id}\") public Response delete(@PathParam(\"id\") Long id) { Todo entity = Todo.findById(id); if (entity == null) { throw new WebApplicationException(\"Todo with id of \" + id + \" does not exist.\", Status.NOT_FOUND); } entity.delete(); return Response.noContent().build(); } }",
"quarkus.kubernetes-service-binding.services.my-db.api-version=postgres-operator.crunchydata.com/v1beta1 quarkus.kubernetes-service-binding.services.my-db.kind=PostgresCluster quarkus.kubernetes-service-binding.services.my-db.name=hippo quarkus.datasource.db-kind=postgresql quarkus.hibernate-orm.database.generation=drop-and-create quarkus.hibernate-orm.sql-load-script=import.sql",
"INSERT INTO todo(id, title, completed) VALUES (nextval('hibernate_sequence'), 'Finish the blog post', false);",
"mvn clean install -Dquarkus.kubernetes.deploy=true -DskipTests",
"get pods -n demo -w",
"port-forward service/todo-example 8080:80",
"http://localhost:8080/todo",
"quarkus.datasource.db-kind=postgresql",
"services: - apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster name: postgresql",
"quarkus.datasource.fruits-db.db-kind=postgresql",
"services: - apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster name: fruits-db",
"quarkus.datasource.fruits-db.db-kind=mysql",
"services: - apiVersion: pxc.percona.com/v1-9-0 kind: PerconaXtraDBCluster name: fruits-db",
"quarkus.datasource.db-kind=postgresql quarkus.kubernetes-service-binding.services.postgresql.api-version=postgres-operator.crunchydata.com/v1beta2",
"quarkus.datasource.fruits-db.db-kind=postgresql",
"quarkus.kubernetes-service-binding.services.fruits-db.api-version=postgres-operator.crunchydata.com/v1beta1 quarkus.kubernetes-service-binding.services.fruits-db.kind=PostgresCluster quarkus.kubernetes-service-binding.services.fruits-db.name=fruits-db"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/service_binding/assembly_service-binding_quarkus-service-binding |
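To make the workload projection mechanism described above more concrete, the following sketch shows how the projected binding files could be read directly from the file system. This is a minimal illustration only, assuming a hypothetical layout such as k8s-sb/fruit-db/{type,host,port,username,password} under the directory referenced by the SERVICE_BINDING_ROOT environment variable; in a real application the quarkus-kubernetes-service-binding extension performs this mapping for you, so you normally never write this code yourself.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: walk SERVICE_BINDING_ROOT and collect each binding's key/value files.
// The directory layout (for example k8s-sb/fruit-db/type, .../host) is an assumption
// based on the workload projection convention described above.
public class BindingReader {

    // Returns a map of binding name -> (file name -> file content).
    public static Map<String, Map<String, String>> readBindings() throws IOException {
        Map<String, Map<String, String>> bindings = new HashMap<>();
        String root = System.getenv("SERVICE_BINDING_ROOT");
        if (root == null) {
            return bindings; // nothing was projected into the container
        }
        File[] bindingDirs = new File(root).listFiles(File::isDirectory);
        if (bindingDirs == null) {
            return bindings;
        }
        for (File bindingDir : bindingDirs) {
            Map<String, String> entries = new HashMap<>();
            File[] files = bindingDir.listFiles(File::isFile);
            if (files != null) {
                for (File file : files) {
                    // Each file name is a key (for example "type" or "host"),
                    // and its content is the value.
                    entries.put(file.getName(), Files.readString(file.toPath()).trim());
                }
            }
            bindings.put(bindingDir.getName(), entries);
        }
        return bindings;
    }
}

With the assumed layout, readBindings().get("fruit-db").get("type") would return postgresql, which is the kind of information the extension maps onto the corresponding quarkus.datasource.* configuration properties.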
Chapter 120. SQL | Chapter 120. SQL Both producer and consumer are supported The SQL component allows you to work with databases using JDBC queries. The difference between this component and the JDBC component is that in the case of SQL the query is a property of the endpoint, and it uses the message payload as parameters passed to the query. This component uses spring-jdbc behind the scenes for the actual SQL handling. The SQL component also supports: a JDBC based repository for the Idempotent Consumer EIP pattern. See further below. a JDBC based repository for the Aggregator EIP pattern. See further below. 120.1. Dependencies When using sql with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-sql-starter</artifactId> </dependency> 120.2. URI format Note This component can be used as a Transactional Client . The SQL component uses the following endpoint URI notation: sql:select * from table where id=# order by name[?options] You can use named parameters by using the :#name_of_the_parameter style as shown: sql:select * from table where id=:#myId order by name[?options] When using named parameters, Camel will look up the names, in the given precedence: from the message body if it is a java.util.Map from the message headers If a named parameter cannot be resolved, then an exception is thrown. You can use Simple expressions as parameters as shown: sql:select * from table where id=:#${exchangeProperty.myId} order by name[?options] Notice that the standard ? symbol that denotes the parameters to an SQL query is substituted with the # symbol, because the ? symbol is used to specify options for the endpoint. The ? symbol replacement can be configured on a per-endpoint basis. You can externalize your SQL queries to files in the classpath or file system as shown: sql:classpath:sql/myquery.sql[?options] And the myquery.sql file is in the classpath and is just plain text: -- this is a comment select * from table where id = :#${exchangeProperty.myId} order by name In the file you can use multiple lines and format the SQL as you wish. You can also use comments, such as the line starting with the double dash (--). 120.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 120.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 120.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 120.4. Component Options The SQL component supports 5 options, which are listed below.
Name Description Default Type dataSource (common) Autowired Sets the DataSource to use to communicate with the database. DataSource bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean usePlaceholder (advanced) Sets whether to use placeholder and replace all placeholder characters with sign in the SQL queries. This option is default true. true boolean 120.5. Endpoint Options The SQL endpoint is configured using URI syntax: with the following path and query parameters: 120.5.1. Path Parameters (1 parameters) Name Description Default Type query (common) Required Sets the SQL query to perform. You can externalize the query by using file: or classpath: as prefix and specify the location of the file. String 120.5.2. Query Parameters (45 parameters) Name Description Default Type allowNamedParameters (common) Whether to allow using named parameters in the queries. true boolean dataSource (common) Autowired Sets the DataSource to use to communicate with the database at endpoint level. DataSource outputClass (common) Specify the full package and class name to use as conversion when outputType=SelectOne. String outputHeader (common) Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. String outputType (common) Make the output of consumer or producer to SelectList as List of Map, or SelectOne as single Java object in the following way: a) If the query has only single column, then that JDBC Column object is returned. (such as SELECT COUNT( ) FROM PROJECT will return a Long object. b) If the query has more than one column, then it will return a Map of that result. c) If the outputClass is set, then it will convert the query result into an Java bean object by calling all the setters that match the column names. It will assume your class has a default constructor to create instance with. 
d) If the query resulted in more than one rows, it throws an non-unique result exception. StreamList streams the result of the query using an Iterator. This can be used with the Splitter EIP in streaming mode to process the ResultSet in streaming fashion. Enum values: SelectOne SelectList StreamList SelectList SqlOutputType separator (common) The separator to use when parameter values is taken from message body (if the body is a String type), to be inserted at # placeholders. Notice if you use named parameters, then a Map type is used instead. The default value is comma. , char breakBatchOnConsumeFail (consumer) Sets whether to break batch if onConsume failed. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean expectedUpdateCount (consumer) Sets an expected update count to validate when using onConsume. -1 int maxMessagesPerPoll (consumer) Sets the maximum number of messages to poll. int onConsume (consumer) After processing each row then this query can be executed, if the Exchange was processed successfully, for example to mark the row as processed. The query can have parameter. String onConsumeBatchComplete (consumer) After processing the entire batch, this query can be executed to bulk update rows etc. The query cannot have parameters. String onConsumeFailed (consumer) After processing each row then this query can be executed, if the Exchange failed, for example to mark the row as failed. The query can have parameter. String routeEmptyResultSet (consumer) Sets whether empty resultset should be allowed to be sent to the hop. Defaults to false. So the empty resultset will be filtered out. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean transacted (consumer) Enables or disables transaction. If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager. false boolean useIterator (consumer) Sets how resultset should be delivered to route. Indicates delivery as either a list or individual object. defaults to true. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy processingStrategy (consumer (advanced)) Allows to plugin to use a custom org.apache.camel.component.sql.SqlProcessingStrategy to execute queries when the consumer has processed the rows/batch. 
SqlProcessingStrategy batch (producer) Enables or disables batch mode. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean noop (producer) If set, will ignore the results of the SQL query and use the existing IN message as the OUT message for the continuation of processing. false boolean useMessageBodyForSql (producer) Whether to use the message body as the SQL and then headers for parameters. If this option is enabled then the SQL in the uri is not used. Note that query parameters in the message body are represented by a question mark instead of a # symbol. false boolean alwaysPopulateStatement (advanced) If enabled then the populateStatement method from org.apache.camel.component.sql.SqlPrepareStatementStrategy is always invoked, also if there is no expected parameters to be prepared. When this is false then the populateStatement is only invoked if there is 1 or more expected parameters to be set; for example this avoids reading the message body/headers for SQL queries with no parameters. false boolean parametersCount (advanced) If set greater than zero, then Camel will use this count value of parameters to replace instead of querying via JDBC metadata API. This is useful if the JDBC vendor could not return correct parameters count, then user may override instead. int placeholder (advanced) Specifies a character that will be replaced to in SQL query. Notice, that it is simple String.replaceAll() operation and no SQL parsing is involved (quoted strings will also change). # String prepareStatementStrategy (advanced) Allows to plugin to use a custom org.apache.camel.component.sql.SqlPrepareStatementStrategy to control preparation of the query and prepared statement. SqlPrepareStatementStrategy templateOptions (advanced) Configures the Spring JdbcTemplate with the key/values from the Map. Map usePlaceholder (advanced) Sets whether to use placeholder and replace all placeholder characters with sign in the SQL queries. true boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. 
So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 120.6. Treatment of the message body The SQL component tries to convert the message body to an object of java.util.Iterator type and then uses this iterator to fill the query parameters (where each query parameter is represented by a # symbol (or configured placeholder) in the endpoint URI). If the message body is not an array or collection, the conversion results in an iterator that iterates over only one object, which is the body itself. For example, if the message body is an instance of java.util.List , the first item in the list is substituted into the first occurrence of # in the SQL query, the second item in the list is substituted into the second occurrence of #, and so on. If batch is set to true , then the interpretation of the inbound message body changes slightly - instead of an iterator of parameters, the component expects an iterator that contains the parameter iterators; the size of the outer iterator determines the batch size. You can use the option useMessageBodyForSql that allows to use the message body as the SQL statement, and then the SQL parameters must be provided in a header with the key SqlConstants.SQL_PARAMETERS . This allows the SQL component to work more dynamically as the SQL query is from the message body. Use templating (such as Velocity , Freemarker ) for conditional processing, e.g. to include or exclude where clauses depending on the presence of query parameters. 120.7. Result of the query For select operations, the result is an instance of List<Map<String, Object>> type, as returned by the JdbcTemplate.queryForList() method. For update operations, a NULL body is returned as the update operation is only set as a header and never as a body. Note See Header Values for more information on the update operation. By default, the result is placed in the message body. If the outputHeader parameter is set, the result is placed in the header. This is an alternative to using a full message enrichment pattern to add headers, it provides a concise syntax for querying a sequence or some other small value into a header. 
It is convenient to use outputHeader and outputType together: from("jms:order.inbox") .to("sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne") .to("jms:order.booking"); 120.8. Using StreamList The producer supports outputType=StreamList that uses an iterator to stream the output of the query. This allows to process the data in a streaming fashion which for example can be used by the Splitter EIP to process each row one at a time, and load data from the database as needed. from("direct:withSplitModel") .to("sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel") .to("log:stream") .split(body()).streaming() .to("log:row") .to("mock:result") .end(); 120.9. Header values When performing update operations, the SQL Component stores the update count in the following message headers: Header Description CamelSqlUpdateCount The number of rows updated for update operations, returned as an Integer object. This header is not provided when using outputType=StreamList. CamelSqlRowCount The number of rows returned for select operations, returned as an Integer object. This header is not provided when using outputType=StreamList. CamelSqlQuery Query to execute. This query takes precedence over the query specified in the endpoint URI. Note that query parameters in the header are represented by a ? instead of a # symbol When performing insert operations, the SQL Component stores the rows with the generated keys and number of these rows in the following message headers: Header Description CamelSqlGeneratedKeysRowCount The number of rows in the header that contains generated keys. CamelSqlGeneratedKeyRows Rows that contains the generated keys (a list of maps of keys). 120.10. Generated keys If you insert data using SQL INSERT, then the RDBMS may support auto generated keys. You can instruct the SQL producer to return the generated keys in headers. To do that set the header CamelSqlRetrieveGeneratedKeys=true . Then the generated keys will be provided as headers with the keys listed in the table above. To specify which generated columns should be retrieved, set the header CamelSqlGeneratedColumns to a String[] or int[] , indicating the column names or indexes, respectively. Some databases requires this, such as Oracle. It may also be necessary to use the parametersCount option if the driver cannot correctly determine the number of parameters. You can see more details in this unit test . 120.11. DataSource You can set a reference to a DataSource in the URI directly: sql:select * from table where id=# order by name?dataSource=#myDS 120.12. Using named parameters In the given route below, we want to get all the projects from the projects table. Notice the SQL query has 2 named parameters, :#lic and :#min. Camel will then lookup for these parameters from the message body or message headers. Notice in the example above we set two headers with constant value for the named parameters: from("direct:projects") .setHeader("lic", constant("ASF")) .setHeader("min", constant(123)) .to("sql:select * from projects where license = :#lic and id > :#min order by id") Though if the message body is a java.util.Map then the named parameters will be taken from the body. from("direct:projects") .to("sql:select * from projects where license = :#lic and id > :#min order by id") 120.13. Using expression parameters in producers In the given route below, we want to get all the project from the database. 
It uses the body of the exchange for defining the license and uses the value of a property as the second parameter. from("direct:projects") .setBody(constant("ASF")) .setProperty("min", constant(123)) .to("sql:select * from projects where license = :#${body} and id > :#${exchangeProperty.min} order by id") 120.13.1. Using expression parameters in consumers When using the SQL component as consumer, you can also use expression parameters (simple language) to build dynamic query parameters, such as calling a method on a bean to retrieve an id, a date, or a similar value. For example, in the sample below we call the nextId method on the bean myIdGenerator: from("sql:select * from projects where id = :#${bean:myIdGenerator.nextId}") .to("mock:result"); And the bean has the following method: public static class MyIdGenerator { private int id = 1; public int nextId() { return id++; } } Notice that there is no existing Exchange with message body and headers, so the simple expressions you can use in the consumer are mostly useful for calling bean methods as in this example. 120.14. Using IN queries with dynamic values The SQL producer allows you to use SQL queries with IN statements where the IN values are dynamically computed, for example from the message body or a header. To use IN you need to: prefix the parameter name with in: add ( ) around the parameter An example explains this better. The following query is used: -- this is a comment select * from projects where project in (:#in:names) order by id In the following route: from("direct:query") .to("sql:classpath:sql/selectProjectsIn.sql") .to("log:query") .to("mock:query"); Then the IN query can use a header with the key names with the dynamic values such as: // use an array template.requestBodyAndHeader("direct:query", "Hi there!", "names", new String[]{"Camel", "AMQ"}); // use a list List<String> names = new ArrayList<String>(); names.add("Camel"); names.add("AMQ"); template.requestBodyAndHeader("direct:query", "Hi there!", "names", names); // use a string with comma-separated values template.requestBodyAndHeader("direct:query", "Hi there!", "names", "Camel,AMQ"); The query can also be specified in the endpoint instead of being externalized (notice that externalizing makes maintaining the SQL queries easier): from("direct:query") .to("sql:select * from projects where project in (:#in:names) order by id") .to("log:query") .to("mock:query"); 120.15. Using the JDBC based idempotent repository In this section we will use the JDBC based idempotent repository. Note Abstract class There is an abstract class org.apache.camel.processor.idempotent.jdbc.AbstractJdbcMessageIdRepository that you can extend to build a custom JDBC idempotent repository. First we have to create the database table which will be used by the idempotent repository. We use the following schema: CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100) ) We added the createdAt column: CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP ) Note The SQL Server TIMESTAMP type is a fixed-length binary-string type. It does not map to any of the JDBC time types: DATE , TIME , or TIMESTAMP . When working with concurrent consumers it is crucial to create a unique constraint on the columns processorName and messageId. Because the syntax for this constraint differs from database to database, we do not show it here. 120.15.1.
Customize the JDBC idempotency repository You have a few options to tune the org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository for your needs: Parameter Default Value Description createTableIfNotExists true Defines whether or not Camel should try to create the table if it doesn't exist. tableName CAMEL_MESSAGEPROCESSED To use a custom table name instead of the default name: CAMEL_MESSAGEPROCESSED. tableExistsString SELECT 1 FROM CAMEL_MESSAGEPROCESSED WHERE 1 = 0 This query is used to figure out whether the table already exists or not. It must throw an exception to indicate the table doesn't exist. createString CREATE TABLE CAMEL_MESSAGEPROCESSED (processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP) The statement which is used to create the table. queryString SELECT COUNT(*) FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ? The query which is used to figure out whether the message already exists in the repository (the result is not equal to '0'). It takes two parameters. The first one is the processor name ( String ) and the second one is the message id ( String ). insertString INSERT INTO CAMEL_MESSAGEPROCESSED (processorName, messageId, createdAt) VALUES (?, ?, ?) The statement which is used to add the entry into the table. It takes three parameters. The first one is the processor name ( String ), the second one is the message id ( String ) and the third one is the timestamp ( java.sql.Timestamp ) when this entry was added to the repository. deleteString DELETE FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ? The statement which is used to delete the entry from the database. It takes two parameters. The first one is the processor name ( String ) and the second one is the message id ( String ). The option tableName can be used to use the default SQL queries but with a different table name. However, if you want to customize the SQL queries, you can configure each of them individually. 120.15.2. Orphan Lock aware Jdbc IdempotentRepository One of the limitations of org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository is that it does not handle orphan locks resulting from a JVM crash or non-graceful shutdown. This can result in unprocessed files/messages if this implementation is used with camel-file, camel-ftp, and so on. If you need to handle orphan lock processing, use org.apache.camel.processor.idempotent.jdbc.JdbcOrphanLockAwareIdempotentRepository . This repository keeps track of the locks held by an instance of the application. For each lock held, the application sends keep alive signals to the lock repository, which updates the createdAt column with the current timestamp. When an application instance tries to acquire a lock, there are three possibilities: The lock entry does not exist, so the lock is provided using the base implementation of JdbcMessageIdRepository . The lock already exists and createdAt >= System.currentTimeMillis() - lockMaxAgeMillis. In this case it is assumed that an active instance has the lock and the lock is not provided to the new instance requesting the lock. The lock already exists and createdAt < System.currentTimeMillis() - lockMaxAgeMillis. In this case it is assumed that there is no active instance which has the lock and the lock is provided to the requesting instance.
The reasoning is that if the original instance that held the lock were still running, it would have updated the createdAt timestamp using its keep alive mechanism. This repository has two additional configuration parameters: Parameter Description lockMaxAgeMillis This refers to the duration after which the lock is considered orphaned i.e. if the currentTimestamp - createdAt >= lockMaxAgeMillis then the lock is orphaned. lockKeepAliveIntervalMillis The frequency at which keep alive updates are done to the createdAt Timestamp column. 120.15.3. Caching Jdbc IdempotentRepository Some SQL implementations are not fast on a per-query basis. The JdbcMessageIdRepository implementation does its idempotent checks individually within SQL transactions. Checking a mere 100 keys can take minutes. The JdbcCachedMessageIdRepository preloads an in-memory cache on start with the entire list of keys. This cache is then checked first before passing through to the original implementation. As with all cache implementations, there are considerations that should be made with regard to stale data and your specific usage. 120.16. Using the JDBC based aggregation repository JdbcAggregationRepository is an AggregationRepository which persists the aggregated messages on the fly. This ensures that you will not lose messages, as the default aggregator uses an in-memory-only AggregationRepository . Together with Camel, the JdbcAggregationRepository provides persistent support for the Aggregator. An Exchange is marked as complete only when it has been successfully processed, which happens when the confirm method is invoked on the AggregationRepository . This means that if the same Exchange fails again, it is retried until it succeeds. You can use the option maximumRedeliveries to limit the maximum number of redelivery attempts for a given recovered Exchange. You must also set the deadLetterUri option so Camel knows where to send the Exchange when the maximumRedeliveries limit is hit. You can see some examples in the unit tests of camel-sql, for example JdbcAggregateRecoverDeadLetterChannelTest.java 120.16.1. Database To be operational, each aggregator uses two tables: the aggregation table and the completed table. By convention, the completed table has the same name as the aggregation table, suffixed with "_COMPLETED" . The name must be configured in the Spring bean with the RepositoryName property. In the following example, aggregation will be used. The table structure definition of both tables is identical: in both cases a String value is used as the key ( id ) whereas a Blob contains the exchange serialized as a byte array. However, one difference should be remembered: the id field does not have the same content depending on the table. In the aggregation table, id holds the correlation Id used by the component to aggregate the messages. In the completed table, id holds the id of the exchange stored in the corresponding blob field. Here is the SQL query used to create the tables, just replace "aggregation" with your aggregator repository name. CREATE TABLE aggregation ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, constraint aggregation_pk PRIMARY KEY (id) ); CREATE TABLE aggregation_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, constraint aggregation_completed_pk PRIMARY KEY (id) ); 120.17. Storing body and headers as text You can configure the JdbcAggregationRepository to store message body and select(ed) headers as String in separate columns.
For example, to store the body and the following two headers, companyName and accountName , use the following SQL: CREATE TABLE aggregationRepo3 ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_pk PRIMARY KEY (id) ); CREATE TABLE aggregationRepo3_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_completed_pk PRIMARY KEY (id) ); Then configure the repository to enable this behavior as shown below: <bean id="repo3" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="repositoryName" value="aggregationRepo3"/> <property name="transactionManager" ref="txManager3"/> <property name="dataSource" ref="dataSource3"/> <!-- configure to store the message body and following headers as text in the repo --> <property name="storeBodyAsText" value="true"/> <property name="headersToStoreAsText"> <list> <value>companyName</value> <value>accountName</value> </list> </property> </bean> 120.17.1. Codec (Serialization) Since they can contain any type of payload, Exchanges are not serializable by design. An Exchange is converted into a byte array so that it can be stored in a database BLOB field. All those conversions are handled by the JdbcCodec class. One detail of the code requires your attention: the ClassLoadingAwareObjectInputStream . The ClassLoadingAwareObjectInputStream has been reused from the Apache ActiveMQ project. It wraps an ObjectInputStream and uses it with the ContextClassLoader rather than the current thread's class loader. The benefit is being able to load classes exposed by other bundles. This allows the exchange body and headers to contain references to custom types. 120.17.2. Transaction A Spring PlatformTransactionManager is required to orchestrate transactions. 120.17.2.1. Service (Start/Stop) The start method verifies the database connection and the presence of the required tables. If anything is wrong, it fails during startup. 120.17.3. Aggregator configuration Depending on the targeted environment, the aggregator might need some configuration. As you already know, each aggregator should have its own repository (with the corresponding pair of tables created in the database) and a data source. If the default lobHandler is not adapted to your database system, it can be injected with the lobHandler property. Here is the declaration for Oracle: <bean id="lobHandler" class="org.springframework.jdbc.support.lob.OracleLobHandler"> <property name="nativeJdbcExtractor" ref="nativeJdbcExtractor"/> </bean> <bean id="nativeJdbcExtractor" class="org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor"/> <bean id="repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="transactionManager" ref="transactionManager"/> <property name="repositoryName" value="aggregation"/> <property name="dataSource" ref="dataSource"/> <!-- Only with Oracle, else use default --> <property name="lobHandler" ref="lobHandler"/> </bean> 120.17.4. Optimistic locking You can turn on optimisticLocking and use this JDBC based aggregation repository in a clustered environment where multiple Camel applications share the same database for the aggregation repository.
If there is a race condition, the JDBC driver throws a vendor-specific exception which the JdbcAggregationRepository can react upon. A mapper is needed to determine which exceptions thrown by the JDBC driver should be regarded as optimistic locking errors. Therefore, the org.apache.camel.processor.aggregate.jdbc.JdbcOptimisticLockingExceptionMapper interface allows you to implement your own custom logic if needed. There is a default implementation, org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper , which works as follows: If the caused exception is an SQLException , the SQLState is checked to see whether it starts with 23. If the caused exception is a DataIntegrityViolationException . If the caused exception class name has "ConstraintViolation" in its name. Optional checking for FQN class name matches if any class names have been configured. In addition, you can add FQN class names, and if the caused exception (or any nested exception) equals any of the FQN class names, then it is an optimistic locking error. Here is an example where we define two extra FQN class names from the JDBC vendor. <bean id="repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="transactionManager" ref="transactionManager"/> <property name="repositoryName" value="aggregation"/> <property name="dataSource" ref="dataSource"/> <property name="jdbcOptimisticLockingExceptionMapper" ref="myExceptionMapper"/> </bean> <!-- use the default mapper with extraFQN class names from our JDBC driver --> <bean id="myExceptionMapper" class="org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper"> <property name="classNames"> <util:set> <value>com.foo.sql.MyViolationExceptoion</value> <value>com.foo.sql.MyOtherViolationExceptoion</value> </util:set> </property> </bean> 120.17.5. Propagation behavior JdbcAggregationRepository uses two distinct transaction templates from Spring-TX. One is read-only and one is used for read-write operations. However, when using JdbcAggregationRepository within a route that itself uses <transacted /> and a common PlatformTransactionManager is used, there may be a need to configure the propagation behavior used by the transaction templates inside JdbcAggregationRepository . Here's a way to do it: <bean id="repo" class="org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository"> <property name="propagationBehaviorName" value="PROPAGATION_NESTED" /> </bean> Propagation is specified by constants of the org.springframework.transaction.TransactionDefinition interface, so propagationBehaviorName is a convenience setter that allows you to use the names of those constants. 120.17.6. PostgreSQL case There is one database that may cause problems with the optimistic locking used by JdbcAggregationRepository . PostgreSQL marks the connection as invalid in case of a data integrity violation exception (the one with SQLState 23505). This makes the connection effectively unusable within a nested transaction. Details can be found in the document. org.apache.camel.processor.aggregate.jdbc.PostgresAggregationRepository extends JdbcAggregationRepository and uses a special INSERT .. ON CONFLICT .. statement to provide optimistic locking behavior. This statement is (with the default aggregation table definition): INSERT INTO aggregation (id, exchange) values (?, ?) ON CONFLICT DO NOTHING Details can be found in the PostgreSQL documentation .
When this clause is used, the java.sql.PreparedStatement.executeUpdate() call returns 0 instead of throwing an SQLException with SQLState=23505. Further handling is exactly the same as with the generic JdbcAggregationRepository , but without marking the PostgreSQL connection as invalid. 120.18. Camel Sql Starter A starter module is available for Spring Boot users. When using the starter, the DataSource can be directly configured using Spring Boot properties. # Example for a mysql datasource spring.datasource.url=jdbc:mysql://localhost/test spring.datasource.username=dbuser spring.datasource.password=dbpass spring.datasource.driver-class-name=com.mysql.jdbc.Driver To use this feature, add the following dependencies to your Spring Boot pom.xml file: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-sql-starter</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-jdbc</artifactId> <version>${spring-boot-version}</version> </dependency> You should also include the specific database driver, if needed. 120.19. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.component.sql-stored.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.sql-stored.enabled Whether to enable auto configuration of the sql-stored component. This is enabled by default. Boolean camel.component.sql-stored.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can be handled during the routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.sql.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.sql.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false Boolean camel.component.sql.enabled Whether to enable auto configuration of the sql component. This is enabled by default.
Boolean camel.component.sql.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to start. By deferring this startup to be lazy, the startup failure can be handled during the routing of messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.sql.use-placeholder Sets whether to use placeholders and replace all placeholder characters with the # sign in the SQL queries. This option is true by default. true Boolean
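The repositories described above are typically wired into a route either through Spring beans, as shown earlier, or programmatically. The following Java DSL snippet is a minimal, illustrative sketch only; the data source, processor name, and endpoint URIs are placeholders rather than values from this guide. It shows one way to plug a JdbcMessageIdRepository into an idempotent consumer so that duplicate messages are skipped based on a messageId header.

import javax.sql.DataSource;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository;

public class JdbcIdempotentRoute extends RouteBuilder {

    private final DataSource dataSource; // provided by your environment, for example the Spring Boot datasource

    public JdbcIdempotentRoute(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void configure() {
        // the processor name ends up in the processorName column of CAMEL_MESSAGEPROCESSED
        JdbcMessageIdRepository repo = new JdbcMessageIdRepository(dataSource, "myProcessorName");

        from("direct:start")
            // only messages whose messageId header has not been seen before pass through
            .idempotentConsumer(header("messageId"), repo)
            .to("mock:result");
    }
}

If orphan lock handling is required, the JdbcOrphanLockAwareIdempotentRepository discussed in section 120.15.2 can be used in the same position, configured with lockMaxAgeMillis and lockKeepAliveIntervalMillis values that suit your environment.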
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-sql-starter</artifactId> </dependency>",
"sql:select * from table where id=# order by name[?options]",
"sql:select * from table where id=:#myId order by name[?options]",
"sql:select * from table where id=:#USD{exchangeProperty.myId} order by name[?options]",
"sql:classpath:sql/myquery.sql[?options]",
"-- this is a comment select * from table where id = :#USD{exchangeProperty.myId} order by name",
"sql:query",
"from(\"jms:order.inbox\") .to(\"sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne\") .to(\"jms:order.booking\");",
"from(\"direct:withSplitModel\") .to(\"sql:select * from projects order by id?outputType=StreamList&outputClass=org.apache.camel.component.sql.ProjectModel\") .to(\"log:stream\") .split(body()).streaming() .to(\"log:row\") .to(\"mock:result\") .end();",
"sql:select * from table where id=# order by name?dataSource=#myDS",
"from(\"direct:projects\") .setHeader(\"lic\", constant(\"ASF\")) .setHeader(\"min\", constant(123)) .to(\"sql:select * from projects where license = :#lic and id > :#min order by id\")",
"from(\"direct:projects\") .to(\"sql:select * from projects where license = :#lic and id > :#min order by id\")",
"from(\"direct:projects\") .setBody(constant(\"ASF\")) .setProperty(\"min\", constant(123)) .to(\"sql:select * from projects where license = :#USD{body} and id > :#USD{exchangeProperty.min} order by id\")",
"from(\"sql:select * from projects where id = :#USD{bean:myIdGenerator.nextId}\") .to(\"mock:result\");",
"public static class MyIdGenerator { private int id = 1; public int nextId() { return id++; }",
"-- this is a comment select * from projects where project in (:#in:names) order by id",
"from(\"direct:query\") .to(\"sql:classpath:sql/selectProjectsIn.sql\") .to(\"log:query\") .to(\"mock:query\");",
"// use an array template.requestBodyAndHeader(\"direct:query\", \"Hi there!\", \"names\", new String[]{\"Camel\", \"AMQ\"}); // use a list List<String> names = new ArrayList<String>(); names.add(\"Camel\"); names.add(\"AMQ\"); template.requestBodyAndHeader(\"direct:query\", \"Hi there!\", \"names\", names); // use a string separated values with comma template.requestBodyAndHeader(\"direct:query\", \"Hi there!\", \"names\", \"Camel,AMQ\");",
"from(\"direct:query\") .to(\"sql:select * from projects where project in (:#in:names) order by id\") .to(\"log:query\") .to(\"mock:query\");",
"CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100) )",
"CREATE TABLE CAMEL_MESSAGEPROCESSED ( processorName VARCHAR(255), messageId VARCHAR(100), createdAt TIMESTAMP )",
"CREATE TABLE aggregation ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, constraint aggregation_pk PRIMARY KEY (id) ); CREATE TABLE aggregation_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, constraint aggregation_completed_pk PRIMARY KEY (id) );",
"CREATE TABLE aggregationRepo3 ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_pk PRIMARY KEY (id) ); CREATE TABLE aggregationRepo3_completed ( id varchar(255) NOT NULL, exchange blob NOT NULL, version BIGINT NOT NULL, body varchar(1000), companyName varchar(1000), accountName varchar(1000), constraint aggregationRepo3_completed_pk PRIMARY KEY (id) );",
"<bean id=\"repo3\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"repositoryName\" value=\"aggregationRepo3\"/> <property name=\"transactionManager\" ref=\"txManager3\"/> <property name=\"dataSource\" ref=\"dataSource3\"/> <!-- configure to store the message body and following headers as text in the repo --> <property name=\"storeBodyAsText\" value=\"true\"/> <property name=\"headersToStoreAsText\"> <list> <value>companyName</value> <value>accountName</value> </list> </property> </bean>",
"<bean id=\"lobHandler\" class=\"org.springframework.jdbc.support.lob.OracleLobHandler\"> <property name=\"nativeJdbcExtractor\" ref=\"nativeJdbcExtractor\"/> </bean> <bean id=\"nativeJdbcExtractor\" class=\"org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor\"/> <bean id=\"repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"transactionManager\" ref=\"transactionManager\"/> <property name=\"repositoryName\" value=\"aggregation\"/> <property name=\"dataSource\" ref=\"dataSource\"/> <!-- Only with Oracle, else use default --> <property name=\"lobHandler\" ref=\"lobHandler\"/> </bean>",
"<bean id=\"repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"transactionManager\" ref=\"transactionManager\"/> <property name=\"repositoryName\" value=\"aggregation\"/> <property name=\"dataSource\" ref=\"dataSource\"/> <property name=\"jdbcOptimisticLockingExceptionMapper\" ref=\"myExceptionMapper\"/> </bean> <!-- use the default mapper with extraFQN class names from our JDBC driver --> <bean id=\"myExceptionMapper\" class=\"org.apache.camel.processor.aggregate.jdbc.DefaultJdbcOptimisticLockingExceptionMapper\"> <property name=\"classNames\"> <util:set> <value>com.foo.sql.MyViolationExceptoion</value> <value>com.foo.sql.MyOtherViolationExceptoion</value> </util:set> </property> </bean>",
"<bean id=\"repo\" class=\"org.apache.camel.processor.aggregate.jdbc.JdbcAggregationRepository\"> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_NESTED\" /> </bean>",
"INSERT INTO aggregation (id, exchange) values (?, ?) ON CONFLICT DO NOTHING",
"Example for a mysql datasource spring.datasource.url=jdbc:mysql://localhost/test spring.datasource.username=dbuser spring.datasource.password=dbpass spring.datasource.driver-class-name=com.mysql.jdbc.Driver",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-sql-starter</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-jdbc</artifactId> <version>USD{spring-boot-version}</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-sql-component-starter |
5.32. coolkey | 5.32. coolkey 5.32.1. RHBA-2012:0948 - coolkey bug fix update Updated coolkey packages that resolve two issues are now available for Red Hat Enterprise Linux 6. Coolkey is a smart card support library for the CoolKey, CAC, and PIV smart cards. Bug Fixes BZ# 700907 Prior to this update, Coolkey did not recognize Spice virtualized CAC cards unless the card contained at least 3 certificates. This update fixes this issue so that cards with one or two certificates are recognized by Coolkey as expected. Note that this issue may also have affected some non-virtualized CAC cards. BZ# 713132 Under certain error conditions, Coolkey could leak memory data because a variable buffer was not being freed properly. With this update, the aforementioned buffer is properly freed, and memory leaks no longer occur. All users of coolkey are advised to upgrade to these updated packages, which resolve these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/coolkey |
Installing and using Red Hat build of OpenJDK 8 for Windows | Installing and using Red Hat build of OpenJDK 8 for Windows Red Hat build of OpenJDK 8 Red Hat Developer Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_windows/index |
Chapter 1. Red Hat Advanced Cluster Security for Kubernetes 4.6 Documentation | Chapter 1. Red Hat Advanced Cluster Security for Kubernetes 4.6 Documentation Welcome to the official Red Hat Advanced Cluster Security for Kubernetes documentation, where you can learn about Red Hat Advanced Cluster Security for Kubernetes and start exploring its features. To go to the Red Hat Advanced Cluster Security for Kubernetes documentation, you can use one of the following methods: Use the left navigation bar to browse the documentation. Select the task that interests you from the contents of this Welcome page. 1.1. Installation activities Understanding installation methods for different platforms : Determine the best installation method for your product and platform. 1.2. Operating Red Hat Advanced Cluster Security for Kubernetes Explore various activities you can perform by using Red Hat Advanced Cluster Security for Kubernetes: Viewing the dashboard : Find information about the Red Hat Advanced Cluster Security for Kubernetes real-time interactive dashboard. Learn how to use it to view key metrics from all your hosts, containers, and services. Compliance feature overview : Understand how to run automated checks and validate compliance based on industry standards, including CIS, NIST, PCI, and HIPAA. Managing vulnerabilities : Learn how to identify and prioritize vulnerabilities for remediation. Responding to violations : Learn how to view policy violations, drill down to the actual cause of the violation, and take corrective actions. 1.3. Configuring Red Hat Advanced Cluster Security for Kubernetes Explore the following typical configuration tasks in Red Hat Advanced Cluster Security for Kubernetes: Adding custom certificates : Learn how to use a custom TLS certificate with Red Hat Advanced Cluster Security for Kubernetes. After you set up a certificate, users and API clients do not have to bypass the certificate security warnings. Backing up Red Hat Advanced Cluster Security for Kubernetes : Learn how to perform manual and automated data backups for Red Hat Advanced Cluster Security for Kubernetes and use these backups for data restoration in the case of an infrastructure disaster or corrupt data. Configuring automatic upgrades for secured clusters : Stay up to date by automating the upgrade process for each secured cluster. 1.4. Integrating with other products Learn how to integrate Red Hat Advanced Cluster Security for Kubernetes with the following products: Integrating with PagerDuty : Learn how to integrate with PagerDuty and forward alerts from Red Hat Advanced Cluster Security for Kubernetes to PagerDuty. Integrating with Slack : Learn how to integrate with Slack and forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Slack. Integrating with Sumo Logic : Learn how to integrate with Sumo Logic and forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Sumo Logic. Integrating by using the syslog protocol : Learn how to integrate with a security information and event management (SIEM) system or a syslog collector for data retention and security investigations. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/about/welcome-index |
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/proc_providing-feedback-on-red-hat-documentation_default |
Chapter 3. Managing GFS2 | Chapter 3. Managing GFS2 This chapter describes the tasks and commands for managing GFS2 and consists of the following sections: Section 3.1, "Creating a GFS2 File System" Section 3.2, "Mounting a GFS2 File System" Section 3.3, "Unmounting a GFS2 File System" Section 3.4, "GFS2 Quota Management" Section 3.5, "Growing a GFS2 File System" Section 3.6, "Adding Journals to a GFS2 File System" Section 3.7, "Data Journaling" Section 3.8, "Configuring atime Updates" Section 3.9, "Suspending Activity on a GFS2 File System" Section 3.10, "Repairing a GFS2 File System" Section 3.11, "The GFS2 Withdraw Function" 3.1. Creating a GFS2 File System You create a GFS2 file system with the mkfs.gfs2 command. You can also use the mkfs command with the -t gfs2 option specified. A file system is created on an activated LVM volume. The following information is required to run the mkfs.gfs2 command: Lock protocol/module name (the lock protocol for a cluster is lock_dlm ) Cluster name (needed when specifying the LockTableName parameter) Number of journals (one journal required for each node that may be mounting the file system) When creating a GFS2 file system, you can use the mkfs.gfs2 command directly, or you can use the mkfs command with the -t parameter specifying a file system of type gfs2 , followed by the GFS2 file system options. Note Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. You can, however, increase the size of an existing file system with the gfs2_grow command, as described in Section 3.5, "Growing a GFS2 File System" . Usage When creating a clustered GFS2 file system, you can use either of the following formats: When creating a local GFS2 file system, you can use either of the following formats: Note As of the Red Hat Enterprise Linux 6 release, Red Hat does not support the use of GFS2 as a single-node file system. Warning Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption. LockProtoName Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm . LockTableName This parameter is specified for a GFS2 file system in a cluster configuration. It has two parts separated by a colon (no spaces) as follows: ClusterName:FSName ClusterName , the name of the cluster for which the GFS2 file system is being created. FSName , the file system name, can be 1 to 16 characters long. The name must be unique for all lock_dlm file systems over the cluster, and for all file systems ( lock_dlm and lock_nolock ) on each local node. Number Specifies the number of journals to be created by the mkfs.gfs2 command. One journal is required for each node that mounts the file system. For GFS2 file systems, more journals can be added later without growing the file system, as described in Section 3.6, "Adding Journals to a GFS2 File System" . BlockDevice Specifies a logical or physical volume. Examples In these examples, lock_dlm is the locking protocol that the file system uses, since this is a clustered file system. The cluster name is alpha , and the file system name is mydata1 . The file system contains eight journals and is created on /dev/vg01/lvol0 . In these examples, a second lock_dlm file system is made, which can be used in cluster alpha . The file system name is mydata2 . 
The file system contains eight journals and is created on /dev/vg01/lvol1 . Complete Options Table 3.1, "Command Options: mkfs.gfs2 " describes the mkfs.gfs2 command options (flags and parameters). Table 3.1. Command Options: mkfs.gfs2 Flag Parameter Description -c Megabytes Sets the initial size of each journal's quota change file to Megabytes . -D Enables debugging output. -h Help. Displays available options. -J Megabytes Specifies the size of the journal in megabytes. Default journal size is 128 megabytes. The minimum size is 8 megabytes. Larger journals improve performance, although they use more memory than smaller journals. -j Number Specifies the number of journals to be created by the mkfs.gfs2 command. One journal is required for each node that mounts the file system. If this option is not specified, one journal will be created. For GFS2 file systems, you can add additional journals at a later time without growing the file system. -O Prevents the mkfs.gfs2 command from asking for confirmation before writing the file system. -p LockProtoName Specifies the name of the locking protocol to use. Recognized locking protocols include: lock_dlm - The standard locking module, required for a clustered file system. lock_nolock - Used when GFS2 is acting as a local file system (one node only). -q Quiet. Do not display anything. -r Megabytes Specifies the size of the resource groups in megabytes. The minimum resource group size is 32 megabytes. The maximum resource group size is 2048 megabytes. A large resource group size may increase performance on very large file systems. If this is not specified, mkfs.gfs2 chooses the resource group size based on the size of the file system: average size file systems will have 256 megabyte resource groups, and bigger file systems will have bigger RGs for better performance. -t LockTableName A unique identifier that specifies the lock table field when you use the lock_dlm protocol; the lock_nolock protocol does not use this parameter. This parameter has two parts separated by a colon (no spaces) as follows: ClusterName:FSName . ClusterName is the name of the cluster for which the GFS2 file system is being created; only members of this cluster are permitted to use this file system. FSName , the file system name, can be 1 to 16 characters in length, and the name must be unique among all file systems in the cluster. -u Megabytes Specifies the initial size of each journal's unlinked tag file. -V Displays command version information. | [
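The flags in the table above can be combined in a single invocation. The following command is an illustrative sketch only; the cluster name, file system name, journal count, sizes, and block device are placeholder values, not values taken from this chapter. It creates a clustered file system with four journals, 256-megabyte journals, and 512-megabyte resource groups:

mkfs.gfs2 -p lock_dlm -t alpha:mydata3 -j 4 -J 256 -r 512 /dev/vg01/lvol2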
"mkfs.gfs2 -p LockProtoName -t LockTableName -j NumberJournals BlockDevice",
"mkfs -t gfs2 -p LockProtoName -t LockTableName -j NumberJournals BlockDevice",
"mkfs.gfs2 -p LockProtoName -j NumberJournals BlockDevice",
"mkfs -t gfs2 -p LockProtoName -j NumberJournals BlockDevice",
"mkfs.gfs2 -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0",
"mkfs -t gfs2 -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0",
"mkfs.gfs2 -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1",
"mkfs -t gfs2 -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ch-manage |
Getting Started with Camel Extensions for Quarkus | Getting Started with Camel Extensions for Quarkus Red Hat build of Apache Camel Extensions for Quarkus 2.13 Getting Started with Camel Extensions for Quarkus | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/getting_started_with_camel_extensions_for_quarkus/index |
Part II. Maintenance tasks | Part II. Maintenance tasks | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/maintenance_tasks |
5.103. hsqldb | 5.103. hsqldb 5.103.1. RHBA-2012:0993 - hsqldb enhancement update Updated hsqldb packages that add an enhancement are now available for Red Hat Enterprise Linux 6. HSQLDB is a relational database engine written in Java, with a JDBC driver, supporting a subset of ANSI-92 SQL. It offers a small (about 100k), fast database engine which offers both in-memory and disk-based tables. Embedded and server modes are available. Additionally, it includes tools such as a minimal web server, in-memory query and management tools (which can be run as applets or servlets), and a number of demonstration examples. Enhancement BZ# 816735 HSQLdb has been updated to add stubs for JDBC 4.1 Users of hsqldb are advised to upgrade to these updated packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/hsqldb |
3.8. Aggressive Link Power Management | 3.8. Aggressive Link Power Management Aggressive Link Power Management (ALPM) is a power-saving technique that helps the disk save power by setting a SATA link to the disk to a low-power setting during idle time (that is when there is no I/O). ALPM automatically sets the SATA link back to an active power state once I/O requests are queued to that link. Power savings introduced by ALPM come at the expense of disk latency. As such, you should only use ALPM if you expect the system to experience long periods of idle I/O time. ALPM is only available on SATA controllers that use the Advanced Host Controller Interface (AHCI). For more information about AHCI, see http://www.intel.com/technology/serialata/ahci.htm . When available, ALPM is enabled by default. ALPM has three modes: min_power This mode sets the link to its lowest power state (SLUMBER) when there is no I/O on the disk. This mode is useful for times when an extended period of idle time is expected. medium_power This mode sets the link to the second lowest power state (PARTIAL) when there is no I/O on the disk. This mode is designed to allow transitions in link power states (for example during times of intermittent heavy I/O and idle I/O) with as small impact on performance as possible. medium_power mode allows the link to transition between PARTIAL and fully-powered (that is "ACTIVE") states, depending on the load. Note that it is not possible to transition a link directly from PARTIAL to SLUMBER and back; in this case, either power state cannot transition to the other without transitioning through the ACTIVE state first. max_performance ALPM is disabled; the link does not enter any low-power state when there is no I/O on the disk. To check whether your SATA host adapters actually support ALPM you can check if the file /sys/class/scsi_host/host*/link_power_management_policy exists. To change the settings simply write the values described in this section to these files or display the files to check for the current setting. Important Setting ALPM to min_power or medium_power will automatically disable the "Hot Plug" feature. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/alpm |
Chapter 3. Getting started with virtualization on IBM POWER | Chapter 3. Getting started with virtualization on IBM POWER You can use KVM virtualization when using RHEL 8 on IBM POWER8 or POWER9 hardware. However, enabling the KVM hypervisor on your system requires extra steps compared to virtualization on AMD64 and Intel64 architectures. Certain RHEL 8 virtualization features also have different or restricted functionality on IBM POWER. Apart from the information in the following sections, using virtualization on IBM POWER works the same as on AMD64 and Intel 64. Therefore, you can see other RHEL 8 virtualization documentation for more information when using virtualization on IBM POWER. 3.1. Enabling virtualization on IBM POWER To set up a KVM hypervisor and create virtual machines (VMs) on an IBM POWER8 or IBM POWER9 system running RHEL 8, follow the instructions below. Prerequisites RHEL 8 is installed and registered on your host machine. The following minimum system resources are available: 6 GB free disk space for the host, plus another 6 GB for each intended VM. 2 GB of RAM for the host, plus another 2 GB for each intended VM. 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load. Your CPU machine type must support IBM POWER virtualization. To verify this, query the platform information in your /proc/cpuinfo file. If the output of this command includes the PowerNV entry, you are running a PowerNV machine type and can use virtualization on IBM POWER. Procedure Load the KVM-HV kernel module Verify that the KVM kernel module is loaded If KVM loaded successfully, the output of this command includes kvm_hv . Install the packages in the virtualization module: Install the virt-install package: Start the libvirtd service. Verification Verify that your system is prepared to be a virtualization host: If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs . If any of the checks return a FAIL value, follow the displayed instructions to fix the problem. If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities. Troubleshooting If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output: However, VMs on such a host system will fail to boot, rather than have performance problems. To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu . Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments. 3.2. How virtualization on IBM POWER differs from AMD64 and Intel 64 KVM virtualization in RHEL 8 on IBM POWER systems is different from KVM on AMD64 and Intel 64 systems in a number of aspects, notably: Memory requirements VMs on IBM POWER consume more memory. Therefore, the recommended minimum memory allocation for a virtual machine (VM) on an IBM POWER host is 2GB RAM. Display protocols The SPICE protocol is not supported on IBM POWER systems. To display the graphical output of a VM, use the VNC protocol. In addition, only the following virtual graphics card devices are supported: vga - only supported in -vga std mode and not in -vga cirrus mode. virtio-vga virtio-gpu SMBIOS SMBIOS configuration is not available. 
Memory allocation errors POWER8 VMs, including compatibility mode VMs, may fail with an error similar to: This is significantly more likely to occur on VMs that use RHEL 7.3 and prior as the guest OS. To fix the problem, increase the CMA memory pool available for the guest's hashed page table (HPT) by adding kvm_cma_resv_ratio= memory to the host's kernel command line, where memory is the percentage of the host memory that should be reserved for the CMA pool (defaults to 5). Huge pages Transparent huge pages (THPs) do not provide any notable performance benefits on IBM POWER8 VMs. However, IBM POWER9 VMs can benefit from THPs as expected. In addition, the size of static huge pages on IBM POWER8 systems are 16 MiB and 16 GiB, as opposed to 2 MiB and 1 GiB on AMD64, Intel 64, and IBM POWER9. As a consequence, to migrate a VM configured with static huge pages from an IBM POWER8 host to an IBM POWER9 host, you must first set up 1GiB huge pages on the VM. kvm-clock The kvm-clock service does not have to be configured for time management in VMs on IBM POWER9. pvpanic IBM POWER9 systems do not support the pvpanic device. However, an equivalent functionality is available and activated by default on this architecture. To enable it in a VM, use the <on_crash> XML configuration element with the preserve value. In addition, make sure to remove the <panic> element from the <devices> section, as its presence can lead to the VM failing to boot on IBM POWER systems. Single-threaded host On IBM POWER8 systems, the host machine must run in single-threaded mode to support VMs. This is automatically configured if the qemu-kvm packages are installed. However, VMs running on single-threaded hosts can still use multiple threads. Peripheral devices A number of peripheral devices supported on AMD64 and Intel 64 systems are not supported on IBM POWER systems, or a different device is supported as a replacement. Devices used for PCI-E hierarchy, including ioh3420 and xio3130-downstream , are not supported. This functionality is replaced by multiple independent PCI root bridges provided by the spapr-pci-host-bridge device. UHCI and EHCI PCI controllers are not supported. Use OHCI and XHCI controllers instead. IDE devices, including the virtual IDE CD-ROM ( ide-cd ) and the virtual IDE disk ( ide-hd ), are not supported. Use the virtio-scsi and virtio-blk devices instead. Emulated PCI NICs ( rtl8139 ) are not supported. Use the virtio-net device instead. Sound devices, including intel-hda , hda-output , and AC97 , are not supported. USB redirection devices, including usb-redir and usb-tablet , are not supported. v2v and p2v The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architecture, and are not provided on IBM POWER. Additional sources For a comparison of selected supported and unsupported virtualization features across system architectures supported by Red Hat, see An overview of virtualization features support in RHEL 8 . | [
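As a hedged illustration of how these differences affect day-to-day use, the following virt-install command is a sketch only; the VM name, resource sizes, OS variant, and installation ISO path are placeholders and are not taken from this chapter. It allocates more than the 2 GB minimum memory recommended for IBM POWER guests and requests VNC graphics, because the SPICE protocol is not supported on this architecture:

virt-install \
  --name demo-power-guest \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --graphics vnc \
  --os-variant rhel8.6 \
  --cdrom /home/user/rhel8.iso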
"grep ^platform /proc/cpuinfo/ platform : PowerNV",
"modprobe kvm_hv",
"lsmod | grep kvm",
"yum module install virt",
"yum install virt-install",
"systemctl start libvirtd",
"virt-host-validate [...] QEMU: Checking if device /dev/vhost-net exists : PASS QEMU: Checking if device /dev/net/tun exists : PASS QEMU: Checking for cgroup 'memory' controller support : PASS QEMU: Checking for cgroup 'memory' controller mount-point : PASS [...] QEMU: Checking for cgroup 'blkio' controller support : PASS QEMU: Checking for cgroup 'blkio' controller mount-point : PASS QEMU: Checking if IOMMU is enabled by kernel : PASS",
"QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)",
"qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate memory"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/getting-started-with-virtualization-in-rhel-8-on-ibm-power_configuring-and-managing-virtualization |
Chapter 5. KafkaClusterSpec schema reference | Chapter 5. KafkaClusterSpec schema reference Used in: KafkaSpec Full list of KafkaClusterSpec schema properties Configures a Kafka cluster. 5.1. listeners Use the listeners property to configure listeners to provide access to Kafka brokers. Example configuration of a plain (unencrypted) listener without authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false # ... zookeeper: # ... 5.2. config Use the config properties to configure Kafka broker options as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka documentation . However, AMQ Streams takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity Properties with the following prefixes cannot be set: advertised. authorizer. broker. controller cruise.control.metrics.reporter.bootstrap. cruise.control.metrics.topic host.name inter.broker.listener.name listener. listeners. log.dir password. port process.roles sasl. security. servers,node.id ssl. super.user zookeeper.clientCnxnSocket zookeeper.connect zookeeper.set.acl zookeeper.ssl If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by AMQ Streams: Any ssl configuration for supported TLS versions and cipher suites Configuration for the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection Cruise Control metrics properties: cruise.control.metrics.topic.num.partitions cruise.control.metrics.topic.replication.factor cruise.control.metrics.topic.retention.ms cruise.control.metrics.topic.auto.create.retries cruise.control.metrics.topic.auto.create.timeout.ms cruise.control.metrics.topic.min.insync.replicas Controller properties: controller.quorum.election.backoff.max.ms controller.quorum.election.timeout.ms controller.quorum.fetch.timeout.ms Example Kafka broker configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 zookeeper.connection.timeout.ms: 6000 # ... 5.3. brokerRackInitImage When rack awareness is enabled, Kafka broker pods use init container to collect the labels from the OpenShift cluster nodes. The container image used for this container can be configured using the brokerRackInitImage property. When the brokerRackInitImage field is missing, the following images are used in order of priority: Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration. 
registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 container image. Example brokerRackInitImage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest # ... Note Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container registry used by AMQ Streams. In this case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly. 5.4. logging Kafka has its own configurable loggers, which include the following: log4j.logger.org.I0Itec.zkclient.ZkClient log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannelUSD log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: # ... logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties # ... Any available loggers that are not configured have their level set to OFF . If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 5.5. 
KafkaClusterSpec schema properties Property Description version The kafka broker version. Defaults to 3.5.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the cluster. integer image The docker image for the pods. The default value depends on the configured Kafka.spec.kafka.version . string listeners Configures listeners of Kafka brokers. GenericKafkaListener array config Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers,node.id, process.roles, controller. (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size,ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation,cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms,cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms,cruise.control.metrics.topic.min.insync.replicas,controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms). map storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod]. EphemeralStorage , PersistentClaimStorage , JbodStorage authorization Authorization configuration for Kafka brokers. The type depends on the value of the authorization.type property within the given object, which must be one of [simple, opa, keycloak, custom]. KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom rack Configuration of the broker.rack broker config. Rack brokerRackInitImage The image of the init container used for initializing the broker.rack . string livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options for Kafka brokers. KafkaJmxOptions resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics logging Logging configuration for Kafka. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated. KafkaClusterTemplate | [
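As a brief, hedged illustration of the jvmOptions property listed in the table above, the following snippet is a sketch only; the heap sizes are placeholder values to tune for your environment, and gcLoggingEnabled follows the JvmOptions schema referenced in the table. It sets the broker heap explicitly and turns on garbage collector logging:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      "-Xms": "2048m"
      "-Xmx": "2048m"
      gcLoggingEnabled: true
    # ...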
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false # zookeeper: #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 zookeeper.connection.timeout.ms: 6000 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaClusterSpec-reference |
Chapter 4. New features | Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 9.1. 4.1. Installer and image creation Automatic FCP SCSI LUN scanning support in installer The installer can now use the automatic LUN scanning when attaching FCP SCSI LUNs on IBM Z systems. Automatic LUN scanning is available for FCP devices operating in NPIV mode, if it is not disabled through the zfcp.allow_lun_scan kernel module parameter. It is enabled by default. It provides access to all SCSI devices found in the storage area network attached to the FCP device with the specified device bus ID. It is not necessary to specify WWPN and FCP LUNs anymore and it is sufficient to provide just the FCP device bus ID. (BZ#1937031) Image builder on-premise now supports the /boot partition customization Image builder on-premise version now supports building images with custom /boot mount point partition size. You can specify the size of the /boot mount point partition in the blueprint customization, to increase the size of the /boot partition in case the default boot partition size is too small. For example: (JIRA:RHELPLAN-130379) Added the --allow-ssh kickstart option to enable password-based SSH root logins During the graphical installation, you have an option to enable password-based SSH root logins. This functionality was not available in kickstart installations. With this update, an option --allow-ssh has been added to the rootpw kickstart command. This option enables the root user to login to the system using SSH with a password. ( BZ#2083269 ) Boot loader menu hidden by default The GRUB boot loader is now configured to hide the boot menu by default. This results in a smoother boot experience. The boot menu is hidden in all of the following cases: When you restart the system from the desktop environment or the login screen. During the first system boot after the installation. When the greenboot package is installed and enabled. If the system boot failed, GRUB always displays the boot menu during the boot. To access the boot menu manually, use either of the following options: Repeatedly press Esc during boot. Repeatedly press F8 during boot. Hold Shift during boot. To disable this feature and configure the boot loader menu to display by default, use the following command: (BZ#2059414) Minimal RHEL installation now installs only the s390utils-core package In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. As a result, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. If you want to use the s390utils-base package with a minimal RHEL installation, you must manually install the package after completing the RHEL installation or explicitly install s390utils-base using a kickstart file. (BZ#1932480) Image builder on-premise now supports uploading images to GCP With this enhancement, you can use image builder CLI to build a gce image, providing credentials for the user or service account that you want to use to upload the images. As a result, image builder creates the image and then uploads the gce image directly to the GCP environment that you specified. 
( BZ#2049492 ) Image builder on-premise CLI supports pushing a container image directly to a registry With this enhancement, you can push RHEL for Edge container images directly to a container registry after they have been built, using the image builder CLI. To build the container image: Set up an upload provider and, optionally, add credentials. Build the container image, passing the container registry and the repository to composer-cli as arguments. After the image is ready, it is available in the container registry you set up. (JIRA:RHELPLAN-130376) Image builder on-premise users now customize their blueprints during the image creation process With this update, the Edit Blueprint page was removed to unify the user experience in the image builder service and in the image builder app in cockpit-composer . Users can now create their blueprints and add their customizations, such as adding packages and creating users, during the image creation process. The versioning of blueprints has also been removed so that blueprints only have one version: the current one. Users have access to older blueprint versions through their already created images. (JIRA:RHELPLAN-122735) 4.2. RHEL for Edge RHEL for Edge now supports the fdo-admin cli utility With this update, you can configure the FDO services directly across all deployment scenarios by using the CLI. Run the following commands to generate the certificates and keys for the services: Note This example takes into consideration that you already installed the fdo-admin-cli RPM package. If you used the source code and compiled it, the correct path is ./target/debug/fdo-admin-tool or ./target/release/fdo-admin-tool , depending on your build options. As a result, after you install and start the service, it runs with the default settings. (JIRA:RHELPLAN-122776) 4.3. Subscription management The subscription-manager utility displays the current status of actions The subscription-manager utility now displays progress information while it is processing the current operation. This is helpful when subscription-manager takes longer than usual to complete its operations related to server communication, for example, registration. To revert to the previous behavior, enter: ( BZ#2092014 ) 4.4. Software management The modulesync command is now available to replace certain workflows in RHEL 9 In RHEL 9, modular packages cannot be installed without modular metadata. Previously, you could use the dnf command to download packages, and then use the createrepo_c command to redistribute those packages. This enhancement introduces the modulesync command to ensure the presence of modular metadata, which ensures package installability. This command downloads RPM packages from modules and creates a repository with modular metadata in a working directory. (BZ#2066646) 4.5. Shells and command-line tools Cronie adds support for a randomized time within a selected range The Cronie utility now supports the ~ (random within range) operator for cronjob execution. As a result, you can start a cronjob at a randomized time within the selected range. ( BZ#2090691 ) ReaR adds new variables for executing commands before and after recovery With this enhancement, ReaR introduces two new variables for easier automation of commands to be executed before and after recovery: PRE_RECOVERY_COMMANDS accepts an array of commands. These commands will be executed before recovery starts. POST_RECOVERY_COMMANDS accepts an array of commands. These commands will be executed after recovery finishes.
These variables are an alternative to PRE_RECOVERY_SCRIPT and POST_RECOVERY_SCRIPT with the following differences: The earlier PRE_RECOVERY_SCRIPT and POST_RECOVERY_SCRIPT variables accept a single shell command. To pass multiple commands to these variables, you must separate the commands by semicolons. The new PRE_RECOVERY_COMMANDS and POST_RECOVERY_COMMANDS variables accept arrays of commands, and each element of the array is executed as a separate command. As a result, providing multiple commands to be executed in the rescue system before and after recovery is now easier and less error-prone. For more information, see the default.conf file. ( BZ#2111059 ) A new package: xmlstarlet XMLStarlet is a set of command-line utilities for parsing, transforming, querying, validating, and editing XML files. The new xmlstarlet package provides a simple set of shell commands that you can use in a similar way as you use UNIX commands for plain text files such as grep , sed , awk , diff , patch , join , and other. (BZ#2069689) opencryptoki rebased to version 3.18.0 The opencryptoki package, which is an implementation of the Public-Key Cryptography Standard (PKCS) #11, has been updated to version 3.18.0. Notable improvements include: Default to Federal Information Processing Standards (FIPS) compliant token data format (tokversion = 3.12). Added support for restricting usage of mechanisms and keys with a global policy. Added support for statistics counting of mechanism usage. The ICA/EP11 tokens now support libica library version 4. The p11sak tool enables setting different attributes for public and private keys. The C_GetMechanismList does not return CKR_BUFFER_TOO_SMALL in the EP11 token. openCryptoki supports two different token data formats: the earlier data format, which uses non-FIPS-approved algorithms (such as DES and SHA1) the new data format, which uses FIPS-approved algorithms only. The earlier data format no longer works because the FIPS provider allows the use of only FIPS-approved algorithms. Important To make openCryptoki work on RHEL 9, migrate the tokens to use the new data format before enabling FIPS mode on the system. This is necessary because the earlier data format is still the default in openCryptoki 3.17 . Existing openCryptoki installations that use the earlier token data format will no longer function when the system is changed to FIPS-enabled. You can migrate the tokens to the new data format by using the pkcstok_migrate utility, which is provided with openCryptoki . Note that pkcstok_migrate uses non-FIPS-approved algorithms during the migration. Therefore, use this tool before enabling FIPS mode on the system. For additional information, see Migrating to FIPS compliance - pkcstok_migrate utility . (BZ#2044179) powerpc-utils rebased to version 1.3.10 The powerpc-utils package, which provides various utilities for a PowerPC platform, has been updated to version 1.3.10. Notable improvements include: Added the capability to parsing the Power architecture platform reference (PAPR) information for energy and frequency in the ppc64_cpu tool. Improved the lparstat utility to display enhanced error messages, when the lparstat -E command fails on max config systems. The lparstat command reports logical partition-related information. Fixed reported online memory in legacy format in the lparstat command. Added support for the acc command for changing the quality of service credits (QoS) dynamically for the NX GZIP accelerator. 
Added improvements to format specifiers in printf() and sprintf() calls. The hcnmgr utility, which provides the HMC tools to hybrid virtual network, includes following enhancements: Added the wicked feature to the Hybrid Network Virtualization HNV FEATURE list. The hcnmgr utility supports wicked hybrid network virtualization (HNV) to use the wicked functions for bonding. hcnmgr maintains an hcnid state for later cleanup. hcnmgr excludes NetworkManager (NM) nmcli code. The NM HNV primary slave setting was fixed. hcnmgr supports the virtual Network Interface Controller (vNIC) as a backup device. Fixed the invalid hexadecimal numbering system message in bootlist . The -l flag included in kpartx utility as -p delimiter value in the bootlist command. Fixes added to sslot utility to prevent memory leak when listing IO slots. Added the DRC type description strings for the latest peripheral component interconnect express (PCIe) slot types in the lsslot utility. Fixed the invalid config address to RTAS in errinjct tool. Added support for non-volatile memory over fabrics (NVMf) devices in the ofpathname utility. The utility provides a mechanism for converting a logical device name to an open firmware device path and the other way round. Added fixes to the non-volatile memory (NVMe) support in asymmetric namespace access (ANA) mode in the ofpathname utility. Installed smt.state file as a configuration file. (BZ#1920964) The Redfish modules are now part of the redhat.rhel_mgmt Ansible collection The redhat.rhel_mgmt Ansible collection now includes the following modules: redfish_info redfish_command redfish_config With that, users can benefit from the management automation, by using the Redfish modules to retrieve server health status, get information about hardware and firmware inventory, perform power management, change BIOS settings, configure Out-Of-Band (OOB) controllers, configure hardware RAID, and perform firmware updates. ( BZ#2112434 ) libvpd rebased to version 2.2.9 The libvpd package, which contains classes for accessing the Vital Product Data (VPD), has been updated to version 2.2.9. Notable improvements include: Fixed database locking Updated libtool utility version information (BZ#2051288) lsvpd rebased to version 1.7.14 The lsvpd package, which provides commands for constituting a hardware inventory system, has been updated to version 1.7.14. With this update, the lsvpd utility prevents corruption of the database file when you run the vpdupdate command. (BZ#2051289) ppc64-diag rebased to version 2.7.8 The ppc64-diag package for platform diagnostics has been updated to version 2.7.8. Notable improvements include: Updated build dependency to use libvpd utility version 2.2.9 or higher Fixed extract_opal_dump error message on unsupported platform Fixed build warning with GCC-8.5 and GCC-11 compilers (BZ#2051286) sysctl introduces identic syntax for arguments as systemd-sysctl The sysctl utility from the procps-ng package, which you can use to modify kernel parameters at runtime, now uses the same syntax for arguments as the systemd-sysctl utility. With this update, sysctl now parses configuration files that contain hyphens ( - ) or globs ( * ) on configuration lines. For more information about the systemd-sysctl syntax, see the sysctl.d(5) man page. ( BZ#2052536 ) Updated systemd-udevd assigns consistent network device names to InfiniBand interfaces Introduced in RHEL 9, the new version of the systemd package contains the updated systemd-udevd device manager. 
The device manager changes the default names of InfiniBand interfaces to consistent names selected by systemd-udevd . You can define custom naming rules for naming InfiniBand interfaces by following the Renaming IPoIB devices procedure. For more details of the naming scheme, see the systemd.net-naming-scheme(7) man page. ( BZ#2136937 ) 4.6. Infrastructure services chrony now uses DHCPv6 NTP servers The NetworkManager dispatcher script for chrony updates the Network time protocol (NTP) sources passed from Dynamic Host Configuration Protocol (DHCP) options. Since RHEL 9.1, the script uses NTP servers provided by DHCPv6 in addition to DHCPv4. The DHCP option 56 specifies the usage of DHCPv6, the DHCP option 42 is DHCPv4-specific. ( BZ#2047415 ) chrony rebased to version 4.2 The chrony suite has been updated to version 4.2. Notable enhancements over version 4.1 include: The server interleaved mode has been improved to be more reliable and supports multiple clients behind a single address translator (Network Address Translation - NAT). Experimental support for the Network Time Protocol Version 4 (NTPv4) extension field has been added to improve time synchronization stability and precision of estimated errors. You can enable this field, which extends the capabilities of the protocol NTPv4, by using the extfield F323 option. Experimental support for NTP forwarding over the Precision Time Protocol (PTP) has been added to enable full hardware timestamping on Network Interface Cards (NIC) that have timestamping limited to PTP packets. You can enable NTP over PTP by using the ptpport 319 directive. ( BZ#2051441 ) unbound rebased to version 1.16.2 The unbound component has been updated to version 1.16.2. unbound is a validating, recursive, and caching DNS resolver. Notable improvements include: With the ZONEMD Zone Verification with RFC 8976 support, recipients can now verify the zone contents for data integrity and origin authenticity. With unbound , you can now configure persistent TCP connections. The SVCB and HTTPS types and handling according to the Service binding and parameter specification through the DNS draft-ietf-dnsop-svcb-https document were added. unbound takes the default TLS ciphers from crypto policies. You can use a Special-Use Domain home.arpa. according to the RFC8375 . This domain is designated for non-unique use in residential home networks. unbound now supports selective enabling of tcp-upstream queries for stub or forward zones. The default of aggressive-nsec option is now yes . The ratelimit logic was updated. You can use a new rpz-signal-nxdomain-ra option for unsetting the RA flag when a query is blocked by an Unbound response policy zone (RPZ) nxdomain reply. With the basic support for Extended DNS Errors (EDE) according to the RFC8914 , you can benefit from additional error information. ( BZ#2087120 ) The password encryption function is now available in whois The whois package now provides the /usr/bin/mkpasswd binary, which you can use to encrypt a password with the crypt C library interface. ( BZ#2054043 ) frr rebased to version 8.2.2 The frr package for managing dynamic routing stack has been updated to version 8.2.2. Notable changes and enhancements over version 8.0 include: Added Ethernet VPN (EVPN) route type-5 gateway IP Overlay Index. Added Autonomous system border router (ASBR) summarization in the Open-shortest-path-first (OSPFv3) protocol. Improved usage of stub and not-so-stubby-areas (NSSA) in OSPFv3. Added the graceful restart capability in OSPFv2 and OSPFv3. 
The link bandwidth in the border gateway protocol (BGP) is now encoded according to the IEEE 754 standard. To use the encoding method, run the neighbor PEER disable-link-bw-encoding-ieee command in the existing configuration. Added the long-lived graceful restart capability in BGP. Implemented the extended administrative shutdown communication rfc9003 , and the extended optional parameters length rfc9072 in BGP. ( BZ#2069563 ) TuneD real-time profiles now auto determine initial CPU isolation setup TuneD is a service for monitoring your system and optimizing the performance profile. You can also isolate central processing units (CPUs) using the tuned-profiles-realtime package to give application threads the most execution time possible. Previously, the real-time profiles for systems running the real-time kernel did not load if you did not specify the list of CPUs to isolate in the isolated_cores parameter. With this enhancement, TuneD introduces the calc_isolated_cores built-in function that automatically calculates housekeeping and isolated cores lists, and applies the calculation to the isolated_cores parameter. With the automatic preset, one core from each socket is reserved for housekeeping, and you can start using the real-time profile without any additional steps. If you want to change the preset, customize the isolated_cores parameter by specifying the list of CPUs to isolate. ( BZ#2093847 ) 4.7. Security New packages: keylime RHEL 9.1 introduces Keylime, a tool for attestation of remote systems, which uses the trusted platform module (TPM) technology. With Keylime, you can verify and continuously monitor the integrity of remote systems. You can also specify encrypted payloads that Keylime delivers to the monitored machines, and define automated actions that trigger whenever a system fails the integrity test. See Ensuring system integrity with Keylime in the RHEL 9 Security hardening document for more information. (JIRA:RHELPLAN-92522) New option in OpenSSH supports setting the minimum RSA key length Accidentally using short RSA keys makes the system more vulnerable to attacks. With this update, you can set minimum RSA key lengths for OpenSSH servers and clients. To define the minimum RSA key length, use the new RequiredRSASize option in the /etc/ssh/sshd_config file for OpenSSH servers, and in the /etc/ssh/ssh_config file for OpenSSH clients. ( BZ#2066882 ) crypto-policies enforce 2048-bit RSA key length minimum for OpenSSH by default Using short RSA keys makes the system more vulnerable to attacks. Because OpenSSH now supports limiting minimum RSA key length, the system-wide cryptographic policies enforce the 2048-bit minimum key length for RSA by default. If you encounter OpenSSH failing connections with an Invalid key length error message, start using longer RSA keys. Alternatively, you can relax the restriction by using a custom subpolicy at the expense of security. For example, if the update-crypto-policies --show command reports that the current policy is DEFAULT : Define a custom subpolicy by inserting the min_rsa_size@openssh = 1024 parameter into the /etc/crypto-policies/policies/modules/RSA-OPENSSH-1024.pmod file. Apply the custom subpolicy using the update-crypto-policies --set DEFAULT:RSA-OPENSSH-1024 command. 
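The two subpolicy steps described above can be illustrated with the following sketch (run as root; the file name and parameter come from the note itself):

    # create the custom subpolicy module relaxing the OpenSSH RSA minimum
    echo 'min_rsa_size@openssh = 1024' > /etc/crypto-policies/policies/modules/RSA-OPENSSH-1024.pmod

    # apply the DEFAULT policy together with the new subpolicy
    update-crypto-policies --set DEFAULT:RSA-OPENSSH-1024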
( BZ#2102774 )

New option in OpenSSL supports SHA-1 for signatures
OpenSSL 3.0.0 in RHEL 9 does not support SHA-1 for signature creation and verification by default (SHA-1 key derivation functions (KDF) and hash-based message authentication codes (HMAC) are still supported). However, to support backwards compatibility with RHEL 8 systems that still use SHA-1 for signatures, a new configuration option rh-allow-sha1-signatures is introduced to RHEL 9. This option, if enabled in the alg_section of openssl.cnf, permits the creation and verification of SHA-1 signatures. This option is automatically enabled if the LEGACY system-wide cryptographic policy (not legacy provider) is set. Note that this also affects the installation of RPM packages with SHA-1 signatures, which may require switching to the LEGACY system-wide cryptographic policy. (BZ#2060510, BZ#2055796)

crypto-policies now support sntrup761x25519-sha512@openssh.com
This update of the system-wide cryptographic policies adds support for the sntrup761x25519-sha512@openssh.com key exchange (KEX) method. The post-quantum sntrup761 algorithm is already available in the OpenSSH suite, and this method provides better security against attacks from quantum computers. To enable sntrup761x25519-sha512@openssh.com, create and apply a subpolicy, for example: For more information, see the Customizing system-wide cryptographic policies with subpolicies section in the RHEL 9 Security hardening document. ( BZ#2070604 )

NSS no longer supports RSA keys shorter than 1023 bits
The update of the Network Security Services (NSS) libraries changes the minimum key size for all RSA operations from 128 to 1023 bits. This means that NSS no longer performs the following functions: Generate RSA keys shorter than 1023 bits. Sign or verify RSA signatures with RSA keys shorter than 1023 bits. Encrypt or decrypt values with RSA keys shorter than 1023 bits. ( BZ#2091905 )

SELinux policy confines additional services
The selinux-policy packages have been updated, and therefore the following services are now confined by SELinux: ksm, nm-priv-helper, rhcd, stalld, systemd-network-generator, targetclid, wg-quick (BZ#1965013, BZ#1964862, BZ#2020169, BZ#2021131, BZ#2042614, BZ#2053639, BZ#2111069)

SELinux supports the self keyword in type transitions
SELinux tooling now supports type transition rules with the self keyword in the policy sources. Support for type transitions with the self keyword prepares the SELinux policy for labeling of anonymous inodes. ( BZ#2069718 )

SELinux user-space packages updated
SELinux user-space packages libsepol, libselinux, libsemanage, policycoreutils, checkpolicy, and mcstrans were updated to the latest upstream release 3.4. The most notable changes are: Added support for parallel relabeling through the -T option in the setfiles, restorecon, and fixfiles tools. You can either specify the number of process threads in this option or use -T 0 to use the maximum of available processor cores. This reduces the time required for relabeling significantly. Added the new --checksum option, which prints SHA-256 hashes of modules. Added new policy utilities in the libsepol-utils package. ( BZ#2079276 )

SELinux automatic relabeling is now parallel by default
Because the newly introduced parallel relabeling option significantly reduces the time required for the SELinux relabeling process on multi-core systems, the automatic relabeling script now contains the -T 0 option in the fixfiles command line.
The -T 0 option ensures that the setfiles program uses the maximum of available processor cores for relabeling by default. To use only one process thread for relabeling as in the version of RHEL, override this setting by entering either the fixfiles -T 1 onboot command instead of just fixfiles onboot or the echo "-T 1" > /.autorelabel command instead of touch /.autorelabel . ( BZ#2115242 ) SCAP Security Guide rebased to 0.1.63 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.63. This version provides various enhancements and bug fixes, most notably: New compliance rules for sysctl , grub2 , pam_pwquality , and build time kernel configuration were added. Rules hardening the PAM stack now use authselect as the configuration tool. Note: With this change, the rules hardening the PAM stack are not applied if the PAM stack was edited by other means. ( BZ#2070563 ) Added a maximum size option for Rsyslog error files Using the new action.errorfile.maxsize option, you can specify a maximum number of bytes of the error file for the Rsyslog log processing system. When the error file reaches the specified size, Rsyslog cannot write any additional errors or other data in it. This prevents the error file from filling up the file system and making the host unusable. ( BZ#2064318 ) clevis-luks-askpass is now enabled by default The /lib/systemd/system-preset/90-default.preset file now contains the enable clevis-luks-askpass.path configuration option and the installation of the clevis-systemd sub-package ensures that the clevis-luks-askpass.path unit file is enabled. This enables the Clevis encryption client to unlock also LUKS-encrypted volumes that mount late in the boot process. Before this update, the administrator must use the systemctl enable clevis-luks-askpass.path command to enable Clevis to unlock such volumes. ( BZ#2107078 ) fapolicyd rebased to 1.1.3 The fapolicyd packages have been upgraded to version 1.1.3. Notable improvements and bug fixes include: Rules can now contain the new subject PPID attribute, which matches the parent PID (process ID) of a subject. The OpenSSL library replaced the Libgcrypt library as a cryptographic engine for hash computations. The fagenrules --load command now works correctly. ( BZ#2100041 ) 4.8. Networking The act_ctinfo kernel module has been added This enhancement adds the act_ctinfo kernel module to RHEL. Using the ctinfo action of the tc utility, administrators can copy the conntrack mark or the value of the differentiated services code point (DSCP) of network packets into the socket buffer's mark metadata field. As a result, you can use conditions based on the conntrack mark or the DSCP value to filter traffic. For further details, see the tc-ctinfo(8) man page. (BZ#2027894) cloud-init updates network configuration at every boot on Microsoft Azure Microsoft Azure does not change the instance ID when an administrator updates the network interface configuration while a VM is offline. With this enhancement, the cloud-init service always updates the network configuration when the VM boots to ensure that RHEL on Microsoft Azure uses the latest network settings. As a consequence, if you manually configure settings on interfaces, such as an additional search domain, cloud-init may override them when you reboot the VM. For further details and a workaround, see the cloud-init-22.1-5 updates network config on every boot solution. 
( BZ#2144898 ) The PTP driver now supports virtual clocks and time stamping With this enhancement, the Precision Time Protocol (PTP) driver can create virtual PTP Hardware Clocks (PHCs) on top of a free-running PHC by writing to /sys/class/ptp/ptp*/n_vclocks . As a result, users can run multiple domain synchronization with hardware time stamps on one interface. (BZ#2066451) firewalld was rebased to version 1.1.1 The firewalld packages have been upgraded to version 1.1.1. This version provides multiple bug fixes and enhancements over the version: New features: Rich rules support NetFilter-log (NFLOG) target for user-space logging. Note that there is not any NFLOG capable logging daemon in RHEL. However, you can use the tcpdump -i nflog command to collect the logs you need. Support for port forwarding in policies with ingress-zones=HOST and egress-zones={ANY, source based zone } . Other notable changes include: Support for the afp , http3 , jellyfin , netbios-ns , ws-discovery , and ws-discovery-client services Tab-completion and sub-options in Z Shell for the policy option ( BZ#2040689 ) NetworkManager now supports advmss , rto_min , and quickack route attributes With this enhancement, administrators can configure the ipv4.routes setting with the following attributes: rto_min (TIME) - configure the minimum TCP re-transmission timeout in milliseconds when communicating with the route destination quickack (BOOL) - a per-route setting to enable or disable TCP quick ACKs advmss (NUMBER) - advertise maximum segment size (MSS) to the route destination when establishing TCP connections. If unspecified, Linux uses a default value calculated from the maximum transmission unit (MTU) of the first hop device Benefit of implementing the new functionality of ipv4.routes with the mentioned attributes is that there is no need to run the dispatcher script. Note that once you activate a connection with the mentioned route attributes, such changes are set in the kernel. (BZ#2068525) Support for the 802.ad vlan-protocol option in nmstate The nmstate API now supports creating the linux-bridge interfaces using the 802.ad vlan-protocol option. This feature enables the configuration of Service-Tag VLANs. The following example illustrates usage of this functionality in a yaml configuration file. ( BZ#2084474 ) The firewalld service can forward NAT packets originating from the local host to a different host and port You can forward packets sent from the localhost that runs the firewalld service to a different destination port and IP address. The functionality is useful, for example, to forward ports on the loopback device to a container or a virtual machine. Prior to this change, firewalld could only forward ports when it received a packet that originated from another host. For more details and an illustrative configuration, see Using DNAT to forward HTTPS traffic to a different host . ( BZ#2039542 ) NetworkManager now supports migration from ifcfg-rh to key file Users can migrate their existing connection profile files from the ifcfg-rh format to the key file format. This way, all connection profiles will be in one location and in the preferred format. The key file format has the following advantages: Closely resembles the way how NetworkManager expresses network configuration Guarantees compatibility with future RHEL releases Is easier to read Supports all connection profiles To migrate the connections, run: Note that the ifcfg-rh files will work correctly during the RHEL 9 lifetime. 
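As an example of the migration command mentioned above, a minimal sketch (assuming the nmcli connection migrate subcommand described in the nmcli(1) man page; run as root):

    # convert all ifcfg-rh connection profiles to the key file format
    nmcli connection migrate

    # or migrate only a specific profile
    nmcli connection migrate "Wired connection 1"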
However, migrating the configuration to the key file format guarantees compatibility beyond RHEL 9. For more details, see the nmcli(1), nm-settings-keyfile(5), and nm-settings-ifcfg-rh(5) manual pages. ( BZ#2059608 )

More DHCP and IPv6 auto-configuration attributes have been added to the nmstate API
This enhancement adds support for the following attributes to the nmstate API: dhcp-client-id for DHCPv4 connections as described in RFC 2132 and 4361. dhcp-duid for DHCPv6 connections as described in RFC 8415. addr-gen-mode for IPv6 auto-configuration. You can set this attribute to eui64 as described in RFC 4862, or to stable-privacy as described in RFC 7217. ( BZ#2082043 )

NetworkManager now clearly indicates that WEP support is not available in RHEL 9
The wpa_supplicant packages in RHEL 9.0 and later no longer contain the deprecated and insecure Wired Equivalent Privacy (WEP) security algorithm. This enhancement updates NetworkManager to reflect these changes. For example, the nmcli device wifi list command now returns WEP access points at the end of the list in gray color, and connecting to a WEP-protected network returns a meaningful error message. For secure encryption, use only wifi networks with Wi-Fi Protected Access 2 (WPA2) and WPA3 authentication. ( BZ#2030997 )

The MPTCP code has been updated
The MultiPath TCP (MPTCP) code in the kernel has been updated and aligned with upstream Linux 5.19. This update provides a number of bug fixes and enhancements over the previous version: The FASTCLOSE option has been added to close MPTCP connections without a full three-way handshake. The MP_FAIL option has been added to enable fallback to TCP even after the initial handshake. The monitoring capabilities have been improved by adding additional Management Information Base (MIB) counters. Monitor support for MPTCP listener sockets has been added. Use the ss utility to monitor the sockets. (BZ#2079368)

4.9. Kernel

Kernel version in RHEL 9.1
Red Hat Enterprise Linux 9.1 is distributed with the kernel version 5.14.0-162. ( BZ#2125549 )

Memory consumption of the list_lru has been optimized
The internal kernel data structure, list_lru, tracks the "Least Recently Used" status of kernel inodes and directory entries for files. Previously, the number of allocated list_lru structures was directly proportional to the number of mount points and the number of present memory cgroups. Both these numbers increased with the number of running containers, leading to memory consumption of O(n^2), where n is the number of running containers. This update optimizes the memory consumption of list_lru in the system to O(n). As a result, sufficient memory is now available for user applications, especially on systems with a large number of running containers. (BZ#2013413)

BPF rebased to Linux kernel version 5.16
The Berkeley Packet Filter (BPF) facility has been rebased to Linux kernel version 5.16 with multiple bug fixes and enhancements. The most notable changes include: Streamlined internal BPF program sections handling and the bpf_program__set_attach_target() API in the libbpf userspace library. The bpf_program__set_attach_target() API sets the BTF-based attach targets for BPF-based programs. Added support for the BTF_KIND_TAG kind, which allows you to tag declarations. Added support for the bpf_get_branch_snapshot() helper, which enables the tracing program to capture the last branch records (LBR) from the hardware.
Added the legacy kprobe events support in the libbpf userspace library that enables kprobe tracepoint events creation through the legacy interface. Added the capability to access hardware timestamps through BPF specific structures with the __sk_buff helper function. Added support for a batched interface for RX buffer allocation in AF_XDP buffer pool, with driver support for i40e and ice . Added the legacy uprobe support in libbpf userspace library to complement recently merged legacy kprobe . Added the bpf_trace_vprintk() as variadic printk helper. Added the libbpf opt-in for stricter BPF program section name handling as part of libbpf 1.0 effort. Added the libbpf support to locate specialized maps, such as perf RB and internally delete BTF type identifiers while creating them. Added the bloomfilter BPF map type to test if an element exists in a set. Added support for kernel module function calls from BPF. Added support for typeless and weak ksym in light skeleton. Added support for the BTF_KIND_DECL_TAG kind. For more information on the full list of BPF features available in the running kernel, use the bpftool feature command. (BZ#2069045) BTF data is now located in the kernel module BPF Type Format (BTF) is the metadata format that encodes the debug information related to BPF program and map. Previously, the BTF data for kernel modules was stored in the kernel-debuginfo package. As a consequence, it was necessary to install the corresponding kernel-debuginfo package in order to use BTF for kernel modules. With this update, the BTF data is now located directly in the kernel module. As a result, you do not need to install any additional packages for BTF to work. (BZ#2097188) The kernel-rt source tree has been updated to RHEL 9.1 tree The kernel-rt sources have been updated to use the latest Red Hat Enterprise Linux kernel source tree. The real-time patch set has also been updated to the latest upstream version, v5.15-rt . These updates provide a number of bug fixes and enhancements. (BZ#2061574) Dynamic preemptive scheduling enabled on ARM and AMD and Intel 64-bit architectures RHEL 9 provides the dynamic scheduling feature on the ARM and AMD and Intel 64-bit architectures. This enhancement enables changing the preemption mode of the kernel at boot or runtime instead of the compile time. The /sys/kernel/debug/sched/preempt file contains the current setting and allows runtime modification. Using the DYNAMIC_PREEMPT option, you can set the preempt= variable at boot time to either none , voluntary or full with voluntary preemption being the default. Using dynamic preemptive handling, you can override the default preemption model to improve scheduling latency. (BZ#2065226) stalld rebased to version 1.17 The stalld program, which provides the stall daemon, is a mechanism to prevent the starvation state of operating system threads in a Linux system. This version monitors the threads for the starvation state. Starvation occurs when a thread is on a CPU run queue for longer than the starvation threshold. This stalld version includes many improvements and bug fixes over the version. The notable change includes the capability to detect runnable dying tasks. When stalld detects a starving thread, the program changes the scheduling class of the thread to the SCHED_DEADLINE policy, which gives the thread a small slice of time for the specified CPU to run the thread. When the timeslice is used, the thread returns to its original scheduling policy and stalld continues to monitor the thread states. 
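Returning to the dynamic preemptive scheduling item above, a brief sketch of the interfaces it names (the debugfs path and the preempt= parameter are taken from the note; the echoed value is an illustrative choice):

    # show the currently selected preemption model
    cat /sys/kernel/debug/sched/preempt

    # switch to full preemption at runtime
    echo full > /sys/kernel/debug/sched/preempt

Alternatively, the model can be selected at boot time by appending, for example, preempt=full to the kernel command line.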
( BZ#2107275 ) The tpm2-tools package has been rebased to tpm2-tools-5.2-1 version The tpm2-tools package has been rebased to version tpm2-tools-5.2-1 . This upgrade provides many significant enhancements and bug fixes. Most notable changes include: Adds support for public-key output at primary object creation using the tpm2_createprimary and tpm2_create tools. Adds support for the tpm2_print tool to print public-key output formats. tpm2_print decodes a Trusted Platform Module (TPM) data structure and prints enclosed elements. Adds support to the tpm2_eventlog tool for reading logs larger than 64 KB. Adds the tpm2_sessionconfig tool to support displaying and configuring session attributes. For more information on notable changes, see the /usr/share/doc/tpm2-tools/Changelog.md file. (BZ#2090748) Intel E800 devices now support iWARP and RoCE protocols With this enhancement, you can now use the enable_iwarp and enable_roce devlink parameters to turn on and off iWARP or RoCE protocol support. With this mandatory feature, you can configure the device with one of the protocols. The Intel E800 devices do not support both protocols simultaneously on the same port. To enable or disable the iWARP protocol for a specific E800 device, first obtain the PCI location of the card: Then enable, or disable, the protocol. You can use use pci/0000:44:00.0 for the first port, and pci/0000:44:00.1 for second port of the card as argument to the devlink command To enable or disable the RoCE protocol for a specific E800 device, obtain the PCI location of the card as shown above. Then use one of the following commands: (BZ#2096127) 4.10. Boot loader GRUB is signed by new keys Due to security reasons, GRUB is now signed by new keys. As a consequence, you need to update the RHEL firmware to version FW1010.30 (or later) or FW1020 to be able to boot the little-endian variant of IBM Power Systems with the Secure Boot feature enabled. (BZ#2074761) Configurable disk access retries when booting a VM on IBM POWER You can now configure how many times the GRUB boot loader retries accessing a remote disk when a logical partition ( lpar ) virtual machine (VM) boots on the IBM POWER architecture. Lowering the number of retries can prevent a slow boot in certain situations. Previously, GRUB retried accessing disks 20 times when disk access failed at boot. This caused problems if you performed a Live Partition Mobility (LPM) migration on an lpar system that connected to slow Storage Area Network (SAN) disks. As a consequence, the boot might have taken very long on the system until the 20 retries finished. With this update, you can now configure and decrease the number of disk access retries using the ofdisk_retries GRUB option. For details, see Configure disk access retries when booting a VM on IBM POWER . As a result, the lpar boot is no longer slow after LPM on POWER, and the lpar system boots without the failed disks. ( BZ#2070725 ) 4.11. File systems and storage Stratis now enables setting the file system size upon creation You can now set the required size when creating a file system. Previously, the automatic default size was 1 TiB. With this enhancement, users can set an arbitrary filesystem size. The lower limit must not go below 512 MiB. 
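A hedged sketch of the Stratis file-system size option described above (assuming the --size flag of the stratis filesystem create command; the pool and file-system names are placeholders):

    # create a 10 GiB file system instead of relying on the former 1 TiB default
    stratis filesystem create --size 10GiB mypool myfs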
( BZ#1990905 ) Improved overprovision management of Stratis pools With the improvements to the management of thin provisioning, you can now have improved warnings, precise allocation of space for the pool metadata, improved predictability, overall safety, and reliability of thin pool management. A new distinct mode disables overprovisioning. With this enhancement, the user can disable overprovisioning to ensure that a pool contains enough space to support all its file systems, even if these are completely full. ( BZ#2040352 ) Stratis now provides improved individual pool management You can now stop and start stopped individual Stratis pools. Previously, stratisd attempted to start all available pools for all devices it detected. This enhancement provides more flexible management of individual pools within Stratis, better debugging and recovery capabilities. The system no longer requires a reboot to perform recovery and maintenance operations for a single pool. ( BZ#2039960 ) Enabled protocol specific configuration of multipath device paths Previously due to different optimal configurations for the different protocols, it was impossible to set the configuration correctly without setting an option for each individual protocol. With this enhancement, users can now configure multipath device paths based on their path transport protocol. Use the protocol subsection of the overrides section in the /etc/multipath.conf file to correctly configure multipath device paths, based on their protocol. ( BZ#2084365 ) New libnvme feature library Previously, the NVMe storage command line interface utility ( nvme-cli ) included all of the helper functions and definitions. This enhancement brings a new libnvme library to RHEL 9.1. The library includes: Type definitions for NVMe specification structures Enumerations and bit fields Helper functions to construct, dispatch, and decode commands and payloads Utilities to connect, scan, and manage NVMe devices With this update, users do not need to duplicate the code and multiple projects and packages, such as nvme-stas , and can rely on this common library. (BZ#2099619) A new library libnvme is now available With this update, nvme-cli is divided in two different projects: * nvme-cli now only contains the code specific to the nvme tool * libnvme library now contains all type definitions for NVMe specification structures, enumerations, bit fields, helper functions to construct, dispatch, decode commands and payloads, and utilities to connect, scan, and manage NVMe devices. ( BZ#2090121 ) 4.12. High availability and clusters Support for High Availability on Red Hat OpenStack platform You can now configure a high availability cluster on the Red Hat OpenStack platform. In support of this feature, Red Hat provides the following new cluster agents: fence_openstack : fencing agent for HA clusters on OpenStack openstack-info : resource agent to configure the openstack-info cloned resource, which is required for an HA cluster on OpenStack openstack-virtual-ip : resource agent to configure a virtual IP address resource openstack-floating-ip : resource agent to configure a floating IP address resource openstack-cinder-volume : resource agent to configure a block storage resource ( BZ#2121838 ) pcs supports updating multipath SCSI devices without requiring a system restart You can now update multipath SCSI devices with the pcs stonith update-scsi-devices command. This command updates SCSI devices without causing a restart of other cluster resources running on the same node. 
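As an illustration of the pcs stonith update-scsi-devices command mentioned above, a sketch in which the set keyword, the fence-scsi resource name, and the device paths are assumptions for the example:

    # replace the watched SCSI devices of the fence-scsi stonith resource
    # without restarting other resources running on the same node
    pcs stonith update-scsi-devices fence-scsi set /dev/sdb /dev/sdc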
( BZ#2024522 ) Support for cluster UUID During cluster setup, the pcs command now generates a UUID for every cluster. Since a cluster name is not a unique cluster identifier, you can use the cluster UUID to identify clusters with the same name when you administer multiple clusters. You can display the current cluster UUID with the pcs cluster config [show] command. You can add a UUID to an existing cluster or regenerate a UUID if it already exists by using the pcs cluster config uuid generate command. ( BZ#2054671 ) New pcs resource config command option to display the pcs commands that re-create configured resources The pcs resource config command now accepts the --output-format=cmd option. Specifying this option displays the pcs commands you can use to re-create configured resources on a different system. ( BZ#2058251 ) New pcs stonith config command option to display the pcs commands that re-create configured fence devices The pcs stonith config command now accepts the --output-format=cmd option. Specifying this option displays the pcs commands you can use to re-create configured fence devices on a different system. ( BZ#2058252 ) Pacemaker rebased to version 2.1.4 The Pacemaker packages have been upgraded to the upstream version of Pacemaker 2.1.4. Notable changes include: The multiple-active resource parameter now accepts a value of stop_unexpected , The multiple-active resource parameter determines recovery behavior when a resource is active on more than one node when it should not be. By default, this situation requires a full restart of the resource, even if the resource is running successfully where it should be. A value of stop_unexpected for this parameter specifies that only unexpected instances of a multiply-active resource are stopped. It is the user's responsibility to verify that the service and its resource agent can function with extra active instances without requiring a full restart. Pacemaker now supports the allow-unhealthy-node resource meta-attribute. When this meta-attribute is set to true , the resource is not forced off a node due to degraded node health. When health resources have this attribute set, the cluster can automatically detect if the node's health recovers and move resources back to it. Users can now specify Access Control Lists (ACLS) for a system group using the pcs acl group command. Pacemaker previously allowed ACLs to be specified for individual users, but it is sometimes simpler and would conform better with local policies to specify ACLs for a system group, and to have them apply to all users in that group. This command was present in earlier releases but had no effect. ( BZ#2072108 ) Samba no longer automatically installed with cluster packages As of this release, installing the packages for the RHEL High Availability Add-On no longer installs the Samba packages automatically. This also allows you to remove the Samba packages without automatically removing the HA packages as well. If your cluster uses Samba resources you must now manually install them. (BZ#1826455) 4.13. Dynamic programming languages, web and database servers The nodejs:18 module stream is now fully supported The nodejs:18 module stream, previously available as a Technology Preview, is fully supported with the release of the RHSA-2022:8832 advisory. The nodejs:18 module stream now provides Node.js 18.12 , which is a Long Term Support (LTS) version. Node.js 18 included in RHEL 9.1 provides numerous new features together with bug and security fixes over Node.js 16 . 
Notable changes include: The V8 engine has been upgraded to version 10.2. The npm package manager has been upgraded to version 8.19.2. Node.js now provides a new experimental fetch API. Node.js now provides a new experimental node:test module, which facilitates the creation of tests that report results in the Test Anything Protocol (TAP) format. Node.js now prefers IPv6 addresses over IPv4. To install the nodejs:18 module stream, use: (BZ#2083072) A new module stream: php:8.1 RHEL 9.1 adds PHP 8.1 as a new php:8.1 module stream. With PHP 8.1 , you can: Define a custom type that is limited to one of a discrete number of possible values using the Enumerations (Enums) feature Declare a property with the readonly modifier to prevent modification of the property after initialization Use fibers, full-stack, interruptible functions To install the php:8.1 module stream, use: For details regarding PHP usage on RHEL 9, see Using the PHP scripting language . (BZ#2070040) A new module stream: ruby:3.1 RHEL 9.1 introduces Ruby 3.1.2 in a new ruby:3.1 module stream. This version provides a number of performance improvements, bug and security fixes, and new features over Ruby 3.0 distributed with RHEL 9.0. Notable enhancements include: The Interactive Ruby (IRB) utility now provides an autocomplete feature and a documentation dialog A new debug gem, which replaces lib/debug.rb , provides improved performance, and supports remote debugging and multi-process/multi-thread debugging The error_highlight gem now provides a fine-grained error location in the backtrace Values in the hash literal data types and keyword arguments can now be omitted The pin operator ( ^ ) now accepts an expression in pattern matching Parentheses can now be omitted in one-line pattern matching YJIT, a new experimental in-process Just-in-Time (JIT) compiler, is now available on the AMD and Intel 64-bit architectures The TypeProf For IDE utility has been introduced, which is an experimental static type analysis tool for Ruby code in IDEs The following performance improvements have been implemented in Method Based Just-in-Time Compiler (MJIT): For workloads like Rails , the default maximum JIT cache value has increased from 100 to 10000 Code compiled using JIT is no longer canceled when a TracePoint for class events is enabled Other notable changes include: The tracer.rb file has been removed Since version 4.0, the Psych YAML parser uses the safe_load method by default To install the ruby:3.1 module stream, use: (BZ#2063773) httpd rebased to version 2.4.53 The Apache HTTP Server has been updated to version 2.4.53, which provides bug fixes, enhancements, and security fixes over version 2.4.51 distributed with RHEL 9.0. Notable changes in the mod_proxy and mod_proxy_connect modules include: mod_proxy : The length limit of the name of the controller has been increased mod_proxy : You can now selectively configure timeouts for backend and frontend mod_proxy : You can now disable TCP connections redirection by setting the SetEnv proxy-nohalfclose parameter mod_proxy and mod_proxy_connect : It is forbidden to change a status code after sending it to a client In addition, a new ldap function has been added to the expression API, which can help prevent the LDAP injection vulnerability. ( BZ#2079939 ) A new default for the LimitRequestBody directive in httpd configuration To fix CVE-2022-29404 , the default value for the LimitRequestBody directive in the Apache HTTP Server has been changed from 0 (unlimited) to 1 GiB. 
On systems where the value of LimitRequestBody is not explicitly specified in an httpd configuration file, updating the httpd package sets LimitRequestBody to the default value of 1 GiB. As a consequence, if the total size of the HTTP request body exceeds this 1 GiB default limit, httpd returns the 413 Request Entity Too Large error code. If the new default allowed size of an HTTP request message body is insufficient for your use case, update your httpd configuration files within the respective context (server, per-directory, per-file, or per-location) and set your preferred limit in bytes. For example, to set a new 2 GiB limit, use: Systems already configured to use any explicit value for the LimitRequestBody directive are unaffected by this change. (BZ#2128016) New package: httpd-core Starting with RHEL 9.1, the httpd binary file with all essential files has been moved to the new httpd-core package to limit the Apache HTTP Server's dependencies in scenarios where only the basic httpd functionality is needed, for example, in containers. The httpd package now provides systemd -related files, including mod_systemd , mod_brotli , and documentation. With this change, the httpd package no longer provides the httpd Module Magic Number (MMN) value. Instead, the httpd-core package now provides the httpd-mmn value. As a consequence, fetching httpd-mmn from the httpd package is no longer possible. To obtain the httpd-mmn value of the installed httpd binary, you can use the apxs binary, which is a part of the httpd-devel package. To obtain the httpd-mmn value, use the following command: (BZ#2065677) pcre2 rebased to version 10.40 The pcre2 package, which provides the Perl Compatible Regular Expressions library v2, has been updated to version 10.40. With this update, the use of the \K escape sequence in lookaround assertions is forbidden, in accordance with the respective change in Perl 5.32 . If you rely on the behavior, you can use the PCRE2_EXTRA_ALLOW_LOOKAROUND_BSK option. Note that when this option is set, \K is accepted only inside positive assertions but is ignored in negative assertions. ( BZ#2086494 ) 4.14. Compilers and development tools The updated GCC compiler is now available for RHEL 9.1 The system GCC compiler, version 11.2.1, has been updated to include numerous bug fixes and enhancements available in the upstream GCC. The GNU Compiler Collection (GCC) provides tools for developing applications with the C, C++, and Fortran programming languages. For usage information, see Developing C and C++ applications in RHEL 9 . ( BZ#2063255 ) New GCC Toolset 12 GCC Toolset 12 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The GCC compiler has been updated to version 12.1.1, which provides many bug fixes and enhancements that are available in upstream GCC. The following tools and versions are provided by GCC Toolset 12: Tool Version GCC 12.1.1 GDB 11.2 binutils 2.35 dwz 0.14 annobin 10.76 To install GCC Toolset 12, run the following command as root: To run a tool from GCC Toolset 12: To run a shell session where tool versions from GCC Toolset 12 override system versions of these tools: For more information, see GCC Toolset 12 . (BZ#2077465) GCC Toolset 12: Annobin rebased to version 10.76 In GCC Toolset 12, the Annobin package has been updated to version 10.76. 
Notable bug fixes and enhancements include: A new command line option for annocheck tells it to avoid using the debuginfod service, if it is unable to find debug information in another way. Using debuginfod provides annocheck with more information, but it can also cause significant slow downs in annocheck's performance if the debuginfod server is unavailable. The Annobin sources can now be built using meson and ninja rather than configure and make if desired. Annocheck now supports binaries built by the Rust 1.18 compiler. Additionally, the following known issue has been reported in the GCC Toolset 12 version of Annobin: Under some circumstances it is possible for a compilation to fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from annobin.so to gcc-annobin.so : Where architecture is replaced with the architecture being used: aarch64 i686 ppc64le s390x x86_64 (BZ#2077438) GCC Toolset 12: binutils rebased to version 2.38 In GCC Toolset 12, the binutils package has been updated to version 2.38. Notable bug fixes and enhancements include: All tools in the binutils package now support options to display or warn about the presence of multibyte characters. The readelf and objdump tools now automatically follow any links to separate debuginfo files by default. This behavior can be disabled by using the --debug-dump=no-follow-links option for readelf or the --dwarf=no-follow-links option for objdump . (BZ#2077445) GCC 12 and later supports _FORTIFY_SOURCE level 3 With this enhancement, users can build applications with -D_FORTIFY_SOURCE=3 in the compiler command line when building with GCC version 12 or later. _FORTIFY_SOURCE level 3 improves coverage of source code fortification, thus improving security for applications built with -D_FORTIFY_SOURCE=3 in the compiler command line. This is supported in GCC versions 12 and later and all Clang in RHEL 9 with the __builtin_dynamic_object_size builtin. ( BZ#2033683 ) DNS stub resolver option now supports no-aaaa option With this enhancement, glibc now recognizes the no-aaaa stub resolver option in /etc/resolv.conf and the RES_OPTIONS environment variable. When this option is active, no AAAA queries will be sent over the network. System administrators can disable AAAA DNS lookups for diagnostic purposes, such as ruling out that the superfluous lookups on IPv4-only networks do not contribute to DNS issues. ( BZ#2096191 ) Added support for IBM Z Series z16 The support is now available for the s390 instruction set with the IBM z16 platform. IBM z16 provides two additional hardware capabilities in glibc that are HWCAP_S390_VXRS_PDE2 and HWCAP_S390_NNPA . As a result, applications can now use these capabilities to deliver optimized libraries and functions. (BZ#2077838) Applications can use the restartable sequence features through the new glibc interfaces To accelerate the sched_getcpu function (especially on aarch64), it is necessary to use the restartable sequences (rseq) kernel feature by default in glibc . To allow applications to continuously use the shared rseq area, glibc now provides the __rseq_offset , __rseq_size and __rseq_flags symbols which were first added in glibc 2.35 upstream version. With this enhancement, the performance of the sched_getcpu function is increased and applications can now use the restartable sequence features through the new glibc interfaces. 
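As a short illustration of the _FORTIFY_SOURCE level 3 item above (a generic GCC invocation, not taken from the original note; fortification requires optimization to be enabled):

    # build with source fortification level 3 using GCC 12 or later
    gcc -O2 -D_FORTIFY_SOURCE=3 -o app app.c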
( BZ#2085529 ) GCC Toolset 12: GDB rebased to version 11.2 In GCC Toolset 12, the GDB package has been updated to version 11.2. Notable bug fixes and enhancements include: New support for the 64-bit ARM architecture Memory Tagging Extension (MTE). See new commands with the memory-tag prefix. --qualified option for -break-insert and -dprintf-insert . This option looks for an exact match of the user's event location instead of searching in all scopes. For example, break --qualified foo will look for a symbol named foo in the global scope. Without --qualified , GDB will search all scopes for a symbol with that name. --force-condition : Any supplied condition is defined even if it is currently invalid. -break-condition --force : Likewise for the MI command. -file-list-exec-source-files accepts optional REGEXP to limit output. .gdbinit search path includes the config directory. The order is: USDXDG_CONFIG_HOME/gdb/gdbinit USDHOME/.config/gdb/gdbinit USDHOME/.gdbinit Support for ~/.config/gdb/gdbearlyinit or ~/.gdbearlyinit . -eix and -eiex early initialization file options. Terminal user interface (TUI): Support for mouse actions inside terminal user interface (TUI) windows. Key combinations that do not act on the focused window are now passed to GDB. New commands: show print memory-tag-violations set print memory-tag-violations memory-tag show-logical-tag memory-tag with-logical-tag memory-tag show-allocation-tag memory-tag check show startup-quietly and set startup-quietly : A way to specify -q or -quiet in GDB scripts. Only valid in early initialization files. show print type hex and set print type hex : Tells GDB to print sizes or offsets for structure members in hexadecimal instead of decimal. show python ignore-environment and set python ignore-environment : If enabled, GDB's Python interpreter ignores Python environment variables, much like passing -E to the Python executable. Only valid in early initialization files. show python dont-write-bytecode and set python dont-write-bytecode : If off , these commands suppress GDB's Python interpreter from writing bytecode compiled objects of imported modules, much like passing -B to the Python executable. Only valid in early initialization files. Changed commands: break LOCATION if CONDITION : If CONDITION is invalid, GDB refuses to set a breakpoint. The -force-condition option overrides this. CONDITION -force N COND : Same as the command. inferior [ID] : When ID is omitted, this command prints information about the current inferior. Otherwise, unchanged. ptype[ /FLAGS ] TYPE | EXPRESSION : Use the /x flag to use hexadecimal notation when printing sizes and offsets of struct members. Use the /d flag to do the same but using decimal. info sources : Output has been restructured. Python API: Inferior objects contain a read-only connection_num attribute. New gdb.Frame.level() method. New gdb.PendingFrame.level() method. gdb.BreakpoiontEvent emitted instead of gdb.Stop . (BZ#2077494) GDB supports Power 10 PLT instructions GDB now supports Power 10 PLT instructions. With this update, users are able to step into shared library functions and inspect stack backtraces using GDB version 10.2-10 and later. (BZ#1870017) The dyninst packaged rebased to version 12.1 The dyninst package has been rebased to version 12.1. 
Notable bug fixes and enhancements include: Initial support for glibc-2.35 multiple namespaces Concurrency fixes for DWARF parallel parsing Better support for the CUDA and CDNA2 GPU binaries Better support for IBM POWER Systems (little endian) register access Better support for PIE binaries Corrected parsing for catch blocks Corrected access to 64-bit Arm ( aarch64 ) floating point registers ( BZ#2057675 ) A new fileset /etc/profile.d/debuginfod.* Added new fileset for activating organizational debuginfod services. To get a system-wide debuginfod client activation you must add the URL to /etc/debuginfod/FOO.urls file. ( BZ#2088774 ) Rust Toolset rebased to version 1.62.1 Rust Toolset has been updated to version 1.62.1. Notable changes include: Destructuring assignment allows patterns to assign to existing variables in the left-hand side of an assignment. For example, a tuple assignment can swap to variables: (a, b) = (b, a); Inline assembly is now supported on 64-bit x86 and 64-bit ARM using the core::arch::asm! macro. See more details in the "Inline assembly" chapter of the reference, /usr/share/doc/rust/html/reference/inline-assembly.html (online at https://doc.rust-lang.org/reference/inline-assembly.html ). Enums can now derive the Default trait with an explicitly annotated #[default] variant. Mutex , CondVar , and RwLock now use a custom futex -based implementation rather than pthreads, with new optimizations made possible by Rust language guarantees. Rust now supports custom exit codes from main , including user-defined types that implement the newly-stabilized Termination trait. Cargo supports more control over dependency features. The dep: prefix can refer to an optional dependency without exposing that as a feature, and a ? only enables a dependency feature if that dependency is enabled elsewhere, like package-name?/feature-name . Cargo has a new cargo add subcommand for adding dependencies to Cargo.toml . For more details, please see the series of upstream release announcements: Announcing Rust 1.59.0 Announcing Rust 1.60.0 Announcing Rust 1.61.0 Announcing Rust 1.62.0 Announcing Rust 1.62.1 (BZ#2075337) LLVM Toolset rebased to version 14.0.6 LLVM Toolset has been rebased to version 14.0.6. Notable changes include: On 64-bit x86, support for AVX512-FP16 instructions has been added. Support for the Armv9-A, Armv9.1-A and Armv9.2-A architectures has been added. On PowerPC, added the __ibm128 type to represent IBM double-double format, also available as __attribute__((mode(IF))) . clang changes: if consteval for C++2b is now implemented. On 64-bit x86, support for AVX512-FP16 instructions has been added. Completed support of OpenCL C 3.0 and C++ for OpenCL 2021 at experimental state. The -E -P preprocessor output now always omits blank lines, matching GCC behavior. Previously, up to 8 consecutive blank lines could appear in the output. Support -Wdeclaration-after-statement with C99 and later standards, and not just C89, matching GCC's behavior. A notable use case is supporting style guides that forbid mixing declarations and code, but want to move to newer C standards. For more information, see the LLVM Toolset and Clang upstream release notes. (BZ#2061041) Go Toolset rebased to version 1.18.2 Go Toolset has been rebased to version 1.18.2. Notable changes include: The introduction of generics while maintaining backwards compatibility with earlier versions of Go. A new fuzzing library. New debug / buildinfo and net / netip packages. 
The go get tool no longer builds or installs packages. Now, it only handles dependencies in go.mod . If the main module's go.mod file specifies go 1.17 or higher, the go mod download command used without any additional arguments only downloads source code for the explicitly required modules in the main module's go.mod file. To also download source code for transitive dependencies, use the go mod download all command. The go mod vendor subcommand now supports a -o option to set the output directory. The go mod tidy command now retains additional checksums in the go.sum file for modules whose source code is required to verify that only one module in the build list provides each imported package. This change is not conditioned on the Go version in the main module's go.mod file. (BZ#2075169) A new module stream: maven:3.8 RHEL 9.1 introduces Maven 3.8 as a new module stream. To install the maven:3.8 module stream, use: (BZ#2083112) .NET version 7.0 is available Red Hat Enterprise Linux 9.1 is distributed with .NET version 7.0. Notable improvements include: Support for IBM Power ( ppc64le ) For more information, see Release Notes for .NET 7.0 RPM packages and Release Notes for .NET 7.0 containers . (BZ#2112027) 4.15. Identity Management SSSD now supports memory caching for SID requests With this enhancement, SSSD now supports memory caching for SID requests, which are GID and UID lookups by SID and vice versa. Memory caching results in improved performance, for example, when copying large amounts of files to or from a Samba server. (JIRA:RHELPLAN-123369) The ipaservicedelegationtarget and ipaservicedelegationrule Ansible modules are now available You can now use the ipaservicedelegationtarget and ipaservicedelegationrule ansible-freeipa modules to, for example, configure a web console client to allow an Identity Management (IdM) user that has authenticated with a smart card to do the following: Use sudo on the RHEL host on which the web console service is running without being asked to authenticate again. Access a remote host using SSH and access services on the host without being asked to authenticate again. The ipaservicedelegationtarget and ipaservicedelegationrule modules utilize the Kerberos S4U2proxy feature, also known as constrained delegation. IdM traditionally uses this feature to allow the web server framework to obtain an LDAP service ticket on the user's behalf. The IdM-AD trust system uses the feature to obtain a cifs principal. (JIRA:RHELPLAN-117109) SSSD support for anonymous PKINIT for FAST With this enhancement, SSSD now supports anonymous PKINIT for Flexible Authentication via Secure Tunneling (FAST), also called Kerberos armoring in Active Directory. Until now, to use FAST, a Kerberos keytab was needed to request the required credentials. You can now use anonymous PKINIT to create this credential cache to establish the FAST session. To enable anonymous PKINIT, perform the following steps: Set krb5_fast_use_anonymous_pkinit to true in the [domain] section of the sssd.conf file. Restart SSSD. In an IdM environment, you can verify that anonymous PKINIT was used to establish the FAST session by logging in as the IdM user. A cache file with the FAST ticket is created and the Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS indicates that anonymous PKINIT was used: (JIRA:RHELPLAN-123368) IdM now supports Random Serial Numbers With this update, Identity Management (IdM) now includes dogtagpki 11.2.0 , which allows you to use Random Serial Numbers version 3 (RSNv3). 
You can enable RSNv3 by using the --random-serial-numbers option when running ipa-server-install or ipa-ca-install . With RSNv3 enabled, IdM generates fully random serial numbers for certificates and requests in PKI without range management. Using RSNv3, you can avoid range management in large IdM installations and prevent common collisions when reinstalling IdM. Important RSNv3 is supported only for new IdM installations. If enabled, it is required to use RSNv3 on all PKI services. ( BZ#747959 ) IdM now supports a limit on the number of LDAP binds allowed after a user password has expired With this enhancement, you can set the number of LDAP binds allowed when the password of an Identity Management (IdM) user has expired: -1 IdM grants the user unlimited LDAP binds before the user must reset the password. This is the default value, which matches the behavior. 0 This value disables all LDAP binds once a password is expired. In effect, the users must reset their password immediately. 1-MAXINT The value entered allows exactly that many binds post-expiration. The value can be set in the global password policy and in group policies. Note that the count is stored per server. In order for a user to reset their own password they need to bind with their current, expired password. If the user has exhausted all post-expiration binds, then the password must be administratively reset. ( BZ#2091988 ) New ipasmartcard_server and ipasmartcard_client roles With this update, the ansible-freeipa package provides Ansible roles to configure Identity Management (IdM) servers and clients for smart card authentication. The ipasmartcard_server and ipasmartcard_client roles replace the ipa-advise scripts to automate and simplify the integration. The same inventory and naming scheme are used as in the other ansible-freeipa roles. ( BZ#2076567 ) IdM now supports configuring an AD Trust with Windows Server 2022 With this enhancement, you can establish a cross-forest trust between Identity Management (IdM) domains and Active Directory forests that use Domain Controllers running Windows Server 2022. ( BZ#2122716 ) The ipa-dnskeysyncd and ipa-ods-exporter debug messages are no longer logged to /var/log/messages by default Previously, ipa-dnskeysyncd , the service that is responsible for the LDAP-to-OpenDNSSEC synchronization, and ipa-ods-exporter , the Identity Management (IdM) OpenDNSSEC exporter service, logged all debug messages to /var/log/messages by default. As a consequence, log files grew substantially. With this enhancement, you can configure the log level by setting debug=True in the /etc/ipa/dns.conf file. For more information, refer to default.conf(5) , the man page for the IdM configuration file. ( BZ#2083218 ) samba rebased to version 4.16.1 The samba packages have been upgraded to upstream version 4.16.1, which provides bug fixes and enhancements over the version: By default, the smbd process automatically starts the new samba-dcerpcd process on demand to serve Distributed Computing Environment / Remote Procedure Calls (DCERPC). Note that Samba 4.16 and later always requires samba-dcerpcd to use DCERPC. If you disable the rpc start on demand helpers setting in the [global] section in the /etc/samba/smb.conf file, you must create a systemd service unit to run samba-dcerpcd in standalone mode. The Cluster Trivial Database (CTDB) recovery master role has been renamed to leader . 
As a result, the following ctdb sub-commands have been renamed: recmaster to leader setrecmasterrole to setleaderrole The CTDB recovery lock configuration has been renamed to cluster lock . CTDB now uses leader broadcasts and an associated timeout to determine if an election is required. Note that the server message block version 1 (SMB1) protocol is deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Note that Red Hat does not support downgrading tdb database files. After updating Samba, verify the /etc/samba/smb.conf file using the testparm utility. For further information about notable changes, read the upstream release notes before updating. ( BZ#2077487 ) SSSD now supports direct integration with Windows Server 2022 With this enhancement, you can use SSSD to directly integrate your RHEL system with Active Directory forests that use Domain Controllers running Windows Server 2022. ( BZ#2070793 ) Improved SSSD multi-threaded performance Previously, SSSD serialized parallel requests from multi-threaded applications, such as Red Hat Directory Server and Identity Management. This update fixes all SSSD client libraries, such as nss and pam , so they do not serialize requests, therefore allowing requests from multiple threads to be executed in parallel for better performance. To enable the behavior of serialization, set the environment variable SSS_LOCKFREE to NO . (BZ#1978119) Directory Server now supports canceling the Auto Membership plug-in task. Previously, the Auto Membership plug-in task could generate high CPU usage on the server if Directory Server has complex configuration (large groups, complex rules and interaction with other plugins). With this enhancement, you can cancel the Auto Membership plug-in task. As a result, performance issues no longer occur. ( BZ#2052527 ) Directory Server now supports recursive delete operations when using ldapdelete With this enhancement, Directory Server now supports the Tree Delete Control [1.2.840.113556.1.4.805] OpenLDAP control. As a result, you can use the ldapdelete utility to recursively delete subentries of a parent entry. ( BZ#2057063 ) You can now set basic replication options during the Directory Server installation With this enhancement, you can configure basic replication options like authentication credentials and changelog trimming during an instance installation using an .inf file. ( BZ#2057066 ) Directory Server now supports instance creation by a non-root user Previously, non-root users were not able to create Directory Server instances. With this enhancement, a non-root user can use the dscreate ds-root subcommand to configure an environment where dscreate , dsctl , dsconf commands are used as usual to create and administer Directory Server instances. ( BZ#1872451 ) pki packages renamed to idm-pki The following pki packages are now renamed to idm-pki to better distinguish between IDM packages and Red Hat Certificate System ones: idm-pki-tools idm-pki-acme idm-pki-base idm-pki-java idm-pki-ca idm-pki-kra idm-pki-server python3-idm-pki ( BZ#2139877 ) 4.16. Graphics infrastructures Wayland is now enabled with Matrox GPUs The desktop session now enables the Wayland back end with Matrox GPUs. In releases, Wayland was disabled with Matrox GPUs due to performance and other limitations. These problems have now been fixed. 
You can still switch the desktop session from Wayland back to Xorg. For more information, see Overview of GNOME environments . ( BZ#2097308 ) 12th generation Intel Core GPUs are now supported This release adds support for several integrated GPUs for the 12th Gen Intel Core CPUs. This includes Intel UHD Graphics and Intel Xe integrated GPUs found with the following CPU models: Intel Core i3 12100T through Intel Core i9 12900KS Intel Pentium Gold G7400 and G7400T Intel Celeron G6900 and G6900T Intel Core i5-12450HX through Intel Core i9-12950HX Intel Core i3-1220P through Intel Core i7-1280P (JIRA:RHELPLAN-135601) Support for new AMD GPUs This release adds support for several AMD Radeon RX 6000 Series GPUs and integrated graphics of the AMD Ryzen 6000 Series CPUs. The following AMD Radeon RX 6000 Series GPU models are now supported: AMD Radeon RX 6400 AMD Radeon RX 6500 XT AMD Radeon RX 6300M AMD Radeon RX 6500M AMD Ryzen 6000 Series includes integrated GPUs found with the following CPU models: AMD Ryzen 5 6600U AMD Ryzen 5 6600H AMD Ryzen 5 6600HS AMD Ryzen 7 6800U AMD Ryzen 7 6800H AMD Ryzen 7 6800HS AMD Ryzen 9 6900HS AMD Ryzen 9 6900HX AMD Ryzen 9 6980HS AMD Ryzen 9 6980HX (JIRA:RHELPLAN-135602) 4.17. The web console Update progress page in the web console now supports an automatic restart option The update progress page now has a Reboot after completion switch. This reboots the system automatically after installing the updates. ( BZ#2056786 ) 4.18. Red Hat Enterprise Linux system roles The network RHEL system role supports network configuration using the nmstate API With this update, the network RHEL system role supports network configuration through the nmstate API. Users can now directly apply the configuration of the required network state to a network interface instead of creating connection profiles. The feature also allows partial configuration of a network. As a result, the following benefits exist: decreased network configuration complexity reliable way to apply the network state changes no need to track the entire network configuration ( BZ#2072385 ) Users can create connections with IPoIB capability using the network RHEL system role The infiniband connection type of the network RHEL system role now supports the Internet Protocol over Infiniband (IPoIB) capability. To enable this feature, define a value to the p_key option of infiniband . Note that if you specify p_key , the interface_name option of the network_connections variable must be left unset. The implementation of the network RHEL system role did not properly validate the p_key value and the interface_name option for the infiniband connection type. Therefore, the IPoIB functionality never worked before. For more information, see a README file in the /usr/share/doc/rhel-system-roles/network/ directory. ( BZ#2086965 ) HA Cluster RHEL system role now supports SBD fencing and configuration of Corosync settings The HA Cluster system role now supports the following features: SBD fencing Fencing is a crucial part of HA cluster configuration. SBD provides a means for nodes to reliably self-terminate when fencing is required. SBD fencing can be particularly useful in environments where traditional fencing mechanisms are not possible. It is now possible to configure SBD fencing with the HA Cluster system role. Corosync settings The HA Cluster system role now supports the configuration of Corosync settings, such as transport, compression, encryption, links, totem, and quorum. 
These settings are required to match the cluster configuration with customers' needs and environment when the default settings are not suitable. ( BZ#2065337 , BZ#2070452 , BZ#2079626 , BZ#2098212 , BZ#2120709 , BZ#2120712 )

The network RHEL role now configures network settings for routing rules
Previously, you could route the packet based on the destination address field in the packet, but you could not define the source routing and other policy routing rules. With this enhancement, the network RHEL role supports routing rules so that users have control over the packet transmission or route selection. ( BZ#2079622 )

The new previous: replaced configuration enables the firewall system role to reset the firewall settings to default
System administrators who manage different sets of machines, where each machine has different pre-existing firewall settings, can now use the previous: replaced configuration in the firewall role to ensure that all machines have the same firewall configuration settings. The previous: replaced configuration can erase all the existing firewall settings and replace them with consistent settings. ( BZ#2043010 )

New option in the postfix RHEL system role for overwriting configuration
If you manage a group of systems which have inconsistent postfix configurations, you may want to make the configuration consistent on all of them. With this enhancement, you can specify the previous: replaced option within the postfix_conf dictionary to remove any existing configuration and apply the desired configuration on top of a clean postfix installation. As a result, you can erase any existing postfix configuration and ensure consistency on all the systems being managed. ( BZ#2065383 )

Enhanced microsoft.sql.server RHEL system role
The following new variables are now available for the microsoft.sql.server RHEL system role: Variables with the mssql_ha_ prefix to control configuring a high availability cluster. The mssql_tls_remote_src variable to search for mssql_tls_cert and mssql_tls_private_key values on managed nodes. If you keep the default false setting, the role searches for these files on the control node. The mssql_manage_firewall variable to manage firewall ports automatically. If this variable is set to false , you must enable firewall ports manually. The mssql_pre_input_sql_file and mssql_post_input_sql_file variables to control whether you want to run the SQL scripts before the role execution or after it. These new variables replace the former mssql_input_sql_file variable, which did not allow you to influence the time of SQL script execution. ( BZ#2066337 )

The logging RHEL system role supports options startmsg.regex and endmsg.regex in files inputs
With this enhancement, you can now filter log messages coming from files by using regular expressions. The startmsg_regex and endmsg_regex options are now included in the files input. The startmsg_regex represents the regular expression that matches the start part of a message, and the endmsg_regex represents the regular expression that matches the last part of a message. As a result, you can now filter messages based upon properties such as date-time, priority, and severity. ( BZ#2112145 )

The sshd RHEL system role verifies the include directive for the drop-in directory
The sshd RHEL system role on RHEL 9 manages only a file in the drop-in directory, but previously did not verify that the directory is included from the main sshd_config file. With this update, the role verifies that sshd_config contains the include directive for the drop-in directory.
As a result, the role more reliably applies the provided configuration. ( BZ#2052081 ) The sshd RHEL system role can be managed through /etc/ssh/sshd_config The sshd RHEL system role applied to a RHEL 9 managed node places the SSHD configuration in a drop-in directory ( /etc/ssh/sshd_config.d/00-ansible_system_role.conf by default). Previously, any changes to the /etc/ssh/sshd_config file overwrote the default values in 00-ansible_system_role.conf . With this update, you can manage SSHD by using /etc/ssh/sshd_config instead of 00-ansible_system_role.conf while preserving the system default values in 00-ansible_system_role.conf . ( BZ#2052086 ) The metrics role consistently uses "Ansible_managed" comment in its managed configuration files With this update, the metrics role inserts the "Ansible managed" comment to the configuration files, using the Ansible standard ansible_managed variable. The comment indicates that the configuration files should not be directly edited because the metrics role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2065392 ) The storage RHEL system role now supports managing the pool members The storage RHEL system role can now add or remove disks from existing LVM pools without removing the pool first. To increase the pool capacity, the storage RHEL system role can add new disks to the pool and free currently allocated disks in the pool for another use. ( BZ#2072742 ) Support for thinly provisioned volumes is now available in the storage RHEL system role The storage RHEL system role can now create and manage thinly provisioned LVM logical volumes (LVs). Thin provisioned LVs are allocated as they are written, allowing better flexibility when creating volumes as physical storage provided for thin provisioned LVs can be increased later as the need arises. LVM thin provisioning also allows creating more efficient snapshots because the data blocks common to a thin LV and any of its snapshots are shared. ( BZ#2072745 ) Better support for cached volumes is available in the storage RHEL system role The storage RHEL system role can now attach cache to existing LVM logical volumes. LVM cache can be used to improve performance of slower logical volumes by temporarily storing subsets of an LV's data on a smaller, faster device, for example an SSD. This enhances the previously added support for creating cached volumes by allowing adding (attaching) a cache to an existing, previously uncached volume. ( BZ#2072746 ) The logging RHEL system role now supports template , severity and facility options The logging RHEL system role now features new useful severity and facility options to the files inputs as well as a new template option to the files and forwards outputs. Use the template option to specify the traditional time format by using the parameter traditional , the syslog protocol 23 format by using the parameter syslog , and the modern style format by using the parameter modern . As a result, you can now use the logging role to filter by the severity and facility as well as to specify the output format by template. ( BZ#2075119 ) RHEL system roles now available also in playbooks with fact gathering disabled Ansible fact gathering might be disabled in your environment for performance or other reasons. Previously, it was not possible to use RHEL system roles in such configurations. 
With this update, the system detects the ANSIBLE_GATHERING=explicit parameter in your configuration and gather_facts: false parameter in your playbooks, and use the setup: module to gather only the facts required by the given role, if not available from the fact cache. Note If you have disabled Ansible fact gathering due to performance, you can enable Ansible fact caching instead, which does not cause a performance hit of retrieving them from source. ( BZ#2078989 ) The storage role now has less verbosity by default The storage role output is now less verbose by default. With this update, users can increase the verbosity of storage role output to only produce debugging output if they are using Ansible verbosity level 1 or above. ( BZ#2079627 ) The firewall RHEL system role does not require the state parameter when configuring masquerade or icmp_block_inversion When configuring custom firewall zones, variables masquerade and icmp_block_inversion are boolean settings. A value of true implies state: present and a value of false implies state: absent . Therefore, the state parameter is not required when configuring masquerade or icmp_block_inversion . ( BZ#2093423 ) You can now add, update, or remove services using absent and present states in the firewall RHEL system role With this enhancement, you can use the present state to add ports, modules, protocols, services, and destination addresses, or use the absent state to remove them. Note that to use the absent and present states in the firewall RHEL system role, set the permanent option to true . With the permanent option set to true , the state settings apply until changed, and remain unaffected by role reloads. ( BZ#2100292 ) The firewall system role can add or remove an interface to the zone using PCI device ID Using the PCI device ID, the firewall system role can now assign or remove a network interface to or from a zone. Previously, if only the PCI device ID was known instead of the interface name, users had to first identify the corresponding interface name to use the firewall system role. With this update, the firewall system role can now use the PCI device ID to manage a network interface in a zone. ( BZ#2100942 ) The firewall RHEL system role can provide Ansible facts With this enhancement, you can now gather the firewall RHEL system role's Ansible facts from all of your systems by including the firewall: variable in the playbook with no arguments. To gather a more detailed version of the Ansible facts, use the detailed: true argument, for example: ( BZ#2115154 ) Added setting of seuser and selevel to the selinux RHEL system role Sometimes, it is necessary to set seuser and selevel parameters when setting SELinux context file system mappings. With this update, you can use the seuser and selevel optional arguments in selinux_fcontext to specify SELinux user and level in the SELinux context file system mappings. ( BZ#2115157 ) New cockpit system role variable for setting a custom listening port The cockpit system role introduces the cockpit_port variable that allows you to set a custom listening port other than the default 9090 port. Note that if you decide to set a custom listening port, you will also need to adjust your SELinux policy to allow the web console to listen on that port. ( BZ#2115152 ) The metrics role can export postfix performance data You can now use the new metrics_from_postfix boolean variable in the metrics role for recording and detailed performance analysis. 
With this enhancement, setting the variable enables the pmdapostfix metrics agent on the system, making statistics about postfix available. ( BZ#2051737 ) The postfix role consistently uses "Ansible_managed" comment in its managed configuration files The postfix role generates the /etc/postfix/main.cf configuration file. With this update, the postfix role inserts the "Ansible managed" comment to the configuration files, using the Ansible standard ansible_managed variable. The comment indicates that the configuration files should not be directly edited because the postfix role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2065393 ) The nbde-client RHEL system role supports static IP addresses In versions of RHEL, restarting a system with a static IP address and configured with the nbde_client RHEL system role changed the system's IP address. With this update, systems with static IP addresses are supported by the nbde_client role, and their IP addresses do not change after a reboot. Note that by default, the nbde_client role uses DHCP when booting, and switches to the configured static IP after the system is booted. (BZ#2070462) 4.19. Virtualization RHEL web console now features RHEL as an option for the Download an OS VM workflow With this enhancement, the RHEL web console now supports the installation of RHEL virtual machines (VMs) using the default Download an OS workflow. As a result, you can download and install the RHEL OS as a VM directly within the web console. (JIRA:RHELPLAN-121982) Improved KVM architectural compliance With this update, the architectural compliance of the KVM hypervisor has now been enhanced and made stricter. As a result, the hypervisor is now better prepared to address future changes to Linux-based and other operating systems. (JIRA:RHELPLAN-117713) ap-check is now available in RHEL 9 The mdevctl tool now provides a new ap-check support utility. You can use mdevctl to persistently configure cryptographic adapters and domains that are allowed for pass-through usage into virtual machines as well as the matrix and vfio-ap devices. With mdevctl , you do not have to reconfigure these adapters, domains, and devices after every IPL. In addition, mdevctl prevents the distributor from inventing other ways to reconfigure them. When invoking mdevctl commands for vfio-ap devices, the new ap-check support utility is invoked as part of the mdevctl command to perform additional validity checks against vfio-ap device configurations. In addition, the chzdev tool now provides the ability to manage the system-wide Adjunct Processor (AP) mask settings, which determine what AP resources are available for vfio-ap devices. When used, chzdev makes it possible to persist these settings by generating an associated udev rule. Using lszdev , you can can now also query the system-wide AP mask settings. (BZ#1870699) open-vm-tools rebased to 12.0.5 The open-vm-tools packages have been upgraded to version 12.0.5, which introduces a number of bug fixes and new features. Most notably, support has been added for the Salt Minion tool to be managed through guest OS variables. (BZ#2061193) Selected VMs on IBM Z can now boot with kernel command lines longer than 896 bytes Previously, booting a virtual machine (VM) on a RHEL 9 IBM Z host always failed if the kernel command line of the VM was longer than 896 bytes. 
With this update, the QEMU emulator can handle kernel command lines longer than 896 bytes. As a result, you can now use QEMU direct kernel boot for VMs with very long kernel command lines, if the VM kernel supports it. Specifically, to use a command line longer than 896 bytes, the VM must use Linux kernel version 5.16-rc1 or later. (BZ#2044218) The Secure Execution feature on IBM Z now supports remote attestation The Secure Execution feature on the IBM Z architecture now supports remote attestation. The pvattest utility can create a remote attestation request to verify the integrity of a guest that has Secure Execution enabled. Additionally, it is now possible to inject interrupts to guests with Secure Execution through the use of GISA. (BZ#2001936, BZ#2044300) VM memory preallocation using multiple threads You can now define multiple CPU threads for virtual machine (VM) memory allocation in the domain XML configuration, for example as follows: This ensures that more than one thread is used for allocating memory pages when starting a VM. As a result, VMs with multiple allocation threads configured start significantly faster, especially if the VMs has large amounts of RAM assigned and backed by hugepages. (BZ#2064194) RHEL 9 guests now support SEV-SNP On virtual machines (VMs) that use RHEL 9 as a guest operating system, you can now use AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature. Among other benefits, SNP enhances SEV by improving its memory integrity protection, which helps prevent hypervisor-based attacks such as data replay or memory re-mapping. Note that for SEV-SNP to work on a RHEL 9 VM, the host running the VM must support SEV-SNP as well. (BZ#2169738) 4.20. RHEL in cloud environments New SSH module for cloud-init With this update, an SSH module has been added to the cloud-init utility, which automatically generates host keys during instance creation. Note that with this change, the default cloud-init configuration has been updated. Therefore, if you had a local modification, make sure the /etc/cloud/cloud.cfg contains "ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']" line. Otherwise, cloud-init creates an image which fails to start the sshd service. If this occurs, do the following to work around the problem: Make sure the /etc/cloud/cloud.cfg file contains the following line: Check whether /etc/ssh/ssh_host_* files exist in the instance. If the /etc/ssh/ssh_host_* files do not exist, use the following command to generate host keys: Restart the sshd service: (BZ#2115791) 4.21. Containers The Container Tools packages have been updated The Container Tools packages which contain the Podman, Buildah, Skopeo, crun, and runc tools are now available. This update provides a list of bug fixes and enhancements over the version. Notable changes include: The podman pod create command now supports setting the CPU and memory limits. You can set a limit for all containers in the pod, while individual containers within the pod can have their own limits. The podman pod clone command creates a copy of an existing pod. The podman play kube command now supports the security context settings using the BlockDevice and CharDevice volumes. Pods created by the podman play kube can now be managed by systemd unit files using a podman-kube@<service>.service (for example systemctl --user start podman-play-kube@USD(systemd-escape my.yaml).service ). The podman push and podman push manifest commands now support the sigstore signatures. 
The Podman networks can now be isolated by using the podman network --opt isolate command. Podman has been upgraded to version 4.2. For further information about notable changes, see the upstream release notes. (JIRA:RHELPLAN-118462)

GitLab Runner is now available on RHEL using Podman
Beginning with GitLab Runner 15.1, you can use Podman as the container runtime in the GitLab Runner Docker Executor. For more details, see GitLab's Release Note. (JIRA:RHELPLAN-101140)

Podman now supports the --health-on-failure option
The podman run and podman create commands now support the --health-on-failure option to determine the actions to be performed when the status of a container becomes unhealthy, as shown in the example below. The --health-on-failure option supports four actions: none : Take no action, this is the default action. kill : Kill the container. restart : Restart the container. stop : Stop the container. Note Do not combine the restart action with the --restart option. When running inside of a systemd unit, consider using the kill or stop action instead to make use of systemd's restart policy. ( BZ#2097708 )

Netavark network stack is now available
The Netavark stack is a network configuration tool for containers. In RHEL 9, the Netavark stack is fully supported and enabled by default. This network stack has the following capabilities: Configuration of container networks using the JSON configuration file Creating, managing, and removing network interfaces, including bridge and MACVLAN interfaces Configuring firewall settings, such as network address translation (NAT) and port mapping rules IPv4 and IPv6 Improved capability for containers in multiple networks Container DNS resolution using the aardvark-dns project Note You have to use the same version of the Netavark stack and the aardvark-dns authoritative DNS server. (JIRA:RHELPLAN-132023)

New package: catatonit in the CRB repository
A new catatonit package is now available in the CodeReady Linux Builder (CRB) repository. The catatonit package is used as a minimal init program for containers and can be included within the application container image. Note that packages included in the CodeReady Linux Builder repository are unsupported. Note that since RHEL 9.0, the podman-catatonit package is available in the AppStream repository. The podman-catatonit package is used only by the Podman tool. (BZ#2074193)
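A minimal sketch of the --health-on-failure option described above; the image name, port, and health command are illustrative assumptions and are not taken from the release notes:

# Kill the container as soon as its health check starts failing
# (illustrative image and health check; adjust both to your workload)
podman run -d --name web \
  --health-cmd "curl -fs http://localhost:8080/ || exit 1" \
  --health-interval 30s \
  --health-on-failure kill \
  registry.access.redhat.com/ubi9/httpd-24

Because the action here is kill rather than restart, it can be combined with a systemd unit that provides the restart policy, as the note above recommends.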
"[[customizations.filesystem]] mountpoint = \"/boot\" size = \"20 GiB\"",
"grub2-editenv - unset menu_auto_hide",
"mkdir keys for i in \"diun\" \"manufacturer\" \"device_ca\" \"owner\"; do fdo-admin-tool generate-key-and-cert USDi; done ls keys device_ca_cert.pem device_ca_key.der diun_cert.pem diun_key.der manufacturer_cert.pem manufacturer_key.der owner_cert.pem owner_key.der",
"subscription-manager config --rhsm.progress_messages=0",
"echo 'key_exchange = +SNTRUP' > /etc/crypto-policies/policies/modules/SNTRUP.pmod update-crypto-policies --set DEFAULT:SNTRUP",
"--- interfaces: - name: br0 type: linux-bridge state: up bridge: options: vlan-protocol: 802.1ad port: - name: eth1 vlan: mode: trunk trunk-tags: - id: 500",
"nmcli connection migrate",
"lspci | awk '/E810/ {print USD1}' 44:00.0 44:00.1 USD",
"devlink dev param set pci/0000:44:00.0 name enable_iwarp value true cmode runtime devlink dev param set pci/0000:44:00.0 name enable_iwarp value false cmode runtime",
"devlink dev param set pci/0000:44:00.0 name enable_roce value true cmode runtime devlink dev param set pci/0000:44:00.0 name enable_roce value false cmode runtime",
"dnf module install nodejs:18",
"dnf module install php:8.1",
"dnf module install ruby:3.1",
"LimitRequestBody 2147483648",
"apxs -q HTTPD_MMN 20120211",
"dnf install gcc-toolset-12",
"scl enable gcc-toolset-12 tool",
"scl enable gcc-toolset-12 bash",
"cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory",
"cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so",
"dnf module install maven:3.8",
"klist /var/lib/sss/db/fast_ccache_IPA.VM Ticket cache: FILE:/var/lib/sss/db/fast_ccache_IPA.VM Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/10/2022 10:33:45 03/10/2022 10:43:45 krbtgt/[email protected]",
"vars: firewall: detailed: true",
"<memoryBacking> <allocation threads='8'/> </memoryBacking>",
"ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']",
"cloud-init single --name cc_ssh",
"systemctl restart sshd"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/New-features |
Chapter 21. KIE Server capabilities and extensions

The capabilities in KIE Server are determined by plug-in extensions that you can enable, disable, or further extend to meet your business needs. KIE Server supports the following default capabilities and extensions:

Table 21.1. KIE Server capabilities and extensions
Capability name | Extension name | Description
KieServer | KieServer | Provides the core capabilities of KIE Server, such as creating and disposing KIE containers on your server instance
BRM | Drools | Provides the Business Rule Management (BRM) capabilities, such as inserting facts and executing business rules
BPM | jBPM | Provides the Business Process Management (BPM) capabilities, such as managing user tasks and executing business processes
BPM-UI | jBPM-UI | Provides additional user-interface capabilities related to business processes, such as rendering XML forms and SVG images in process diagrams
CaseMgmt | Case-Mgmt | Provides the case management capabilities for business processes, such as managing case definitions and milestones
BRP | OptaPlanner | Provides the Business Resource Planning (BRP) capabilities, such as implementing solvers
DMN | DMN | Provides the Decision Model and Notation (DMN) capabilities, such as managing DMN data types and executing DMN models
Swagger | Swagger | Provides the Swagger web-interface capabilities for interacting with the KIE Server REST API

To view the supported extensions of a running KIE Server instance, send a GET request to the following REST API endpoint and review the XML or JSON server response:

Base URL for GET request for KIE Server information
http://SERVER:PORT/kie-server/services/rest/server

Example JSON response with KIE Server information
{
  "type": "SUCCESS",
  "msg": "Kie Server info",
  "result": {
    "kie-server-info": {
      "id": "test-kie-server",
      "version": "7.67.0.20190818-050814",
      "name": "test-kie-server",
      "location": "http://localhost:8080/kie-server/services/rest/server",
      "capabilities": [ "KieServer", "BRM", "BPM", "CaseMgmt", "BPM-UI", "BRP", "DMN", "Swagger" ],
      "messages": [
        {
          "severity": "INFO",
          "timestamp": { "java.util.Date": 1566169865791 },
          "content": [ "Server KieServerInfo{serverId='test-kie-server', version='7.67.0.20190818-050814', name='test-kie-server', location='http:/localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Sun Aug 18 23:11:05 UTC 2019" ]
        }
      ],
      "mode": "DEVELOPMENT"
    }
  }
}

To enable or disable KIE Server extensions, configure the related *.server.ext.disabled KIE Server system property. For example, to disable the BRM capability, set the system property org.drools.server.ext.disabled=true . For all KIE Server system properties, see Chapter 20, KIE Server system properties .

By default, KIE Server extensions are exposed through REST or JMS data transports and use predefined client APIs. You can extend existing KIE Server capabilities with additional REST endpoints, extend supported transport methods beyond REST or JMS, or extend functionality in the KIE Server client. This flexibility in KIE Server functionality enables you to adapt your KIE Server instances to your business needs, instead of adapting your business needs to the default KIE Server capabilities.

Important: If you extend KIE Server functionality, Red Hat does not support the custom code that you use as part of your custom implementations and extensions.

21.1.
Extending an existing KIE Server capability with a custom REST API endpoint The KIE Server REST API enables you to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Process Automation Manager without using the Business Central user interface. The available REST endpoints are determined by the capabilities enabled in your KIE Server system properties (for example, org.drools.server.ext.disabled=false for the BRM capability). You can extend an existing KIE Server capability with a custom REST API endpoint to further adapt the KIE Server REST API to your business needs. As an example, this procedure extends the Drools KIE Server extension (for the BRM capability) with the following custom REST API endpoint: Example custom REST API endpoint This example custom endpoint accepts a list of facts to be inserted into the working memory of the decision engine, automatically executes all rules, and retrieves all objects from the KIE session in the specified KIE container. Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-rest-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> </dependencies> Implement the org.kie.server.services.api.KieServerApplicationComponentsService interface in a Java class in your project, as shown in the following example: Sample implementation of the KieServerApplicationComponentsService interface public class CusomtDroolsKieServerApplicationComponentsService implements KieServerApplicationComponentsService { 1 private static final String OWNER_EXTENSION = "Drools"; 2 public Collection<Object> getAppComponents(String extension, SupportedTransports type, Object... 
services) { 3 // Do not accept calls from extensions other than the owner extension: if ( !OWNER_EXTENSION.equals(extension) ) { return Collections.emptyList(); } RulesExecutionService rulesExecutionService = null; 4 KieServerRegistry context = null; for( Object object : services ) { if( RulesExecutionService.class.isAssignableFrom(object.getClass()) ) { rulesExecutionService = (RulesExecutionService) object; continue; } else if( KieServerRegistry.class.isAssignableFrom(object.getClass()) ) { context = (KieServerRegistry) object; continue; } } List<Object> components = new ArrayList<Object>(1); if( SupportedTransports.REST.equals(type) ) { components.add(new CustomResource(rulesExecutionService, context)); 5 } return components; } } 1 Delivers REST endpoints to the KIE Server infrastructure that is deployed when the application starts. 2 Specifies the extension that you are extending, such as the Drools extension in this example. 3 Returns all resources that the REST container must deploy. Each extension that is enabled in your KIE Server instance calls the getAppComponents method, so the if ( !OWNER_EXTENSION.equals(extension) ) call returns an empty collection for any extensions other than the specified OWNER_EXTENSION extension. 4 Lists the services from the specified extension that you want to use, such as the RulesExecutionService and KieServerRegistry services from the Drools extension in this example. 5 Specifies the transport type for the extension, either REST or JMS ( REST in this example), and the CustomResource class that returns the resource as part of the components list. Implement the CustomResource class that KIE Server can use to provide the additional functionality for the new REST resource, as shown in the following example: Sample implementation of the CustomResource class // Custom base endpoint: @Path("server/containers/instances/{containerId}/ksession") public class CustomResource { private static final Logger logger = LoggerFactory.getLogger(CustomResource.class); private KieCommands commandsFactory = KieServices.Factory.get().getCommands(); private RulesExecutionService rulesExecutionService; private KieServerRegistry registry; public CustomResource() { } public CustomResource(RulesExecutionService rulesExecutionService, KieServerRegistry registry) { this.rulesExecutionService = rulesExecutionService; this.registry = registry; } // Supported HTTP method, path parameters, and data formats: @POST @Path("/{ksessionId}") @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) public Response insertFireReturn(@Context HttpHeaders headers, @PathParam("containerId") String id, @PathParam("ksessionId") String ksessionId, String cmdPayload) { Variant v = getVariant(headers); String contentType = getContentType(headers); // Marshalling behavior and supported actions: MarshallingFormat format = MarshallingFormat.fromType(contentType); if (format == null) { format = MarshallingFormat.valueOf(contentType); } try { KieContainerInstance kci = registry.getContainer(id); Marshaller marshaller = kci.getMarshaller(format); List<?> listOfFacts = marshaller.unmarshall(cmdPayload, List.class); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, ksessionId); for (Object fact : listOfFacts) { commands.add(commandsFactory.newInsert(fact, fact.toString())); } commands.add(commandsFactory.newFireAllRules()); 
commands.add(commandsFactory.newGetObjects()); ExecutionResults results = rulesExecutionService.call(kci, executionCommand); String result = marshaller.marshall(results); logger.debug("Returning OK response with content '{}'", result); return createResponse(result, v, Response.Status.OK); } catch (Exception e) { // If marshalling fails, return the `call-container` response to maintain backward compatibility: String response = "Execution failed with error : " + e.getMessage(); logger.debug("Returning Failure response with content '{}'", response); return createResponse(response, v, Response.Status.INTERNAL_SERVER_ERROR); } } } In this example, the CustomResource class for the custom endpoint specifies the following data and behavior: Uses the base endpoint server/containers/instances/{containerId}/ksession Uses POST HTTP method Expects the following data to be given in REST requests: The containerId as a path argument The ksessionId as a path argument List of facts as a message payload Supports all KIE Server data formats: XML (JAXB, XStream) JSON Unmarshals the payload into a List<?> collection and, for each item in the list, creates an InsertCommand instance followed by FireAllRules and GetObject commands. Adds all commands to the BatchExecutionCommand instance that calls to the decision engine. To make the new endpoint discoverable for KIE Server, create a META-INF/services/org.kie.server.services.api.KieServerApplicationComponentsService file in your Maven project and add the fully qualified class name of the KieServerApplicationComponentsService implementation class within the file. For this example, the file contains the single line org.kie.server.ext.drools.rest.CusomtDroolsKieServerApplicationComponentsService . Build your project and copy the resulting JAR file into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . Start KIE Server and deploy the built project to the running KIE Server. You can deploy the project using either the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). After your project is deployed on a running KIE Server, you can start interacting with your new REST endpoint. For this example, you can use the following information to invoke the new endpoint: Example request URL: http://localhost:8080/kie-server/services/rest/server/containers/instances/demo/ksession/defaultKieSession HTTP method: POST HTTP headers: Content-Type: application/json Accept: application/json Example message payload: [ { "org.jbpm.test.Person": { "name": "john", "age": 25 } }, { "org.jbpm.test.Person": { "name": "mary", "age": 22 } } ] Example server response: 200 (success) Example server log output: 21.2. Extending KIE Server to use a custom data transport By default, KIE Server extensions are exposed through REST or JMS data transports. You can extend KIE Server to support a custom data transport to adapt KIE Server transport protocols to your business needs. As an example, this procedure adds a custom data transport to KIE Server that uses the Drools extension and that is based on Apache MINA, an open-source Java network-application framework. The example custom MINA transport exchanges string-based data that relies on existing marshalling operations and supports only JSON format. 
Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> <dependency> <groupId>org.apache.mina</groupId> <artifactId>mina-core</artifactId> <version>2.1.3</version> </dependency> </dependencies> Implement the org.kie.server.services.api.KieServerExtension interface in a Java class in your project, as shown in the following example: Sample implementation of the KieServerExtension interface public class MinaDroolsKieServerExtension implements KieServerExtension { private static final Logger logger = LoggerFactory.getLogger(MinaDroolsKieServerExtension.class); public static final String EXTENSION_NAME = "Drools-Mina"; private static final Boolean disabled = Boolean.parseBoolean(System.getProperty("org.kie.server.drools-mina.ext.disabled", "false")); private static final String MINA_HOST = System.getProperty("org.kie.server.drools-mina.ext.port", "localhost"); private static final int MINA_PORT = Integer.parseInt(System.getProperty("org.kie.server.drools-mina.ext.port", "9123")); // Taken from dependency on the `Drools` extension: private KieContainerCommandService batchCommandService; // Specific to MINA: private IoAcceptor acceptor; public boolean isActive() { return disabled == false; } public void init(KieServerImpl kieServer, KieServerRegistry registry) { KieServerExtension droolsExtension = registry.getServerExtension("Drools"); if (droolsExtension == null) { logger.warn("No Drools extension available, quitting..."); return; } List<Object> droolsServices = droolsExtension.getServices(); for( Object object : droolsServices ) { // If the given service is null (not configured), continue to the service: if (object == null) { continue; } if( KieContainerCommandService.class.isAssignableFrom(object.getClass()) ) { batchCommandService = (KieContainerCommandService) object; continue; } } if (batchCommandService != null) { acceptor = new NioSocketAcceptor(); acceptor.getFilterChain().addLast( "codec", new ProtocolCodecFilter( new TextLineCodecFactory( Charset.forName( "UTF-8" )))); acceptor.setHandler( new TextBasedIoHandlerAdapter(batchCommandService) ); acceptor.getSessionConfig().setReadBufferSize( 2048 ); acceptor.getSessionConfig().setIdleTime( IdleStatus.BOTH_IDLE, 10 ); 
try { acceptor.bind( new InetSocketAddress(MINA_HOST, MINA_PORT) ); logger.info("{} -- Mina server started at {} and port {}", toString(), MINA_HOST, MINA_PORT); } catch (IOException e) { logger.error("Unable to start Mina acceptor due to {}", e.getMessage(), e); } } } public void destroy(KieServerImpl kieServer, KieServerRegistry registry) { if (acceptor != null) { acceptor.dispose(); acceptor = null; } logger.info("{} -- Mina server stopped", toString()); } public void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public List<Object> getAppComponents(SupportedTransports type) { // Nothing for supported transports (REST or JMS) return Collections.emptyList(); } public <T> T getAppComponents(Class<T> serviceType) { return null; } public String getImplementedCapability() { return "BRM-Mina"; } public List<Object> getServices() { return Collections.emptyList(); } public String getExtensionName() { return EXTENSION_NAME; } public Integer getStartOrder() { return 20; } @Override public String toString() { return EXTENSION_NAME + " KIE Server extension"; } } The KieServerExtension interface is the main extension interface that KIE Server can use to provide the additional functionality for the new MINA transport. The interface consists of the following components: Overview of the KieServerExtension interface public interface KieServerExtension { boolean isActive(); void init(KieServerImpl kieServer, KieServerRegistry registry); void destroy(KieServerImpl kieServer, KieServerRegistry registry); void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); List<Object> getAppComponents(SupportedTransports type); <T> T getAppComponents(Class<T> serviceType); String getImplementedCapability(); 1 List<Object> getServices(); String getExtensionName(); 2 Integer getStartOrder(); 3 } 1 Specifies the capability that is covered by this extension. The capability must be unique within KIE Server. 2 Defines a human-readable name for the extension. 3 Determines when the specified extension should be started. For extensions that have dependencies on other extensions, this setting must not conflict with the parent setting. For example, in this case, this custom extension depends on the Drools extension, which has StartOrder set to 0 , so this custom add-on extension must be greater than 0 (set to 20 in the sample implementation). In the MinaDroolsKieServerExtension sample implementation of this interface, the init method is the main element for collecting services from the Drools extension and for bootstrapping the MINA server. All other methods in the KieServerExtension interface can remain with the standard implementation to fulfill interface requirements. The TextBasedIoHandlerAdapter class is the handler on the MINA server that reacts to incoming requests. 
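Before moving on to the handler, note that the sample extension reads its connection settings from the org.kie.server.drools-mina.ext.disabled and org.kie.server.drools-mina.ext.port system properties shown in the code above. On a Red Hat JBoss EAP deployment of KIE Server you could pass them on the command line when starting the server; this is only a sketch, and the values shown are the defaults from the sample code:

# Explicitly enable the custom MINA extension (it is enabled by default and listens on port 9123)
EAP_HOME/bin/standalone.sh -Dorg.kie.server.drools-mina.ext.disabled=false

Note that the sample code, as written, looks up the org.kie.server.drools-mina.ext.port property for both MINA_HOST and MINA_PORT, so overriding the port also changes the value used as the host; if you need to override either setting, you will probably want to give the host its own property name in MinaDroolsKieServerExtension.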
Implement the TextBasedIoHandlerAdapter handler for the MINA server, as shown in the following example: Sample implementation of the TextBasedIoHandlerAdapter handler public class TextBasedIoHandlerAdapter extends IoHandlerAdapter { private static final Logger logger = LoggerFactory.getLogger(TextBasedIoHandlerAdapter.class); private KieContainerCommandService batchCommandService; public TextBasedIoHandlerAdapter(KieContainerCommandService batchCommandService) { this.batchCommandService = batchCommandService; } @Override public void messageReceived( IoSession session, Object message ) throws Exception { String completeMessage = message.toString(); logger.debug("Received message '{}'", completeMessage); if( completeMessage.trim().equalsIgnoreCase("quit") || completeMessage.trim().equalsIgnoreCase("exit") ) { session.close(false); return; } String[] elements = completeMessage.split("\\|"); logger.debug("Container id {}", elements[0]); try { ServiceResponse<String> result = batchCommandService.callContainer(elements[0], elements[1], MarshallingFormat.JSON, null); if (result.getType().equals(ServiceResponse.ResponseType.SUCCESS)) { session.write(result.getResult()); logger.debug("Successful message written with content '{}'", result.getResult()); } else { session.write(result.getMsg()); logger.debug("Failure message written with content '{}'", result.getMsg()); } } catch (Exception e) { } } } In this example, the handler class receives text messages and executes them in the Drools service. Consider the following handler requirements and behavior when you use the TextBasedIoHandlerAdapter handler implementation: Anything that you submit to the handler must be a single line because each incoming transport request is a single line. You must pass a KIE container ID in this single line so that the handler expects the format containerID|payload . You can set a response in the way that it is produced by the marshaller. The response can be multiple lines. The handler supports a stream mode that enables you to send commands without disconnecting from a KIE Server session. To end a KIE Server session in stream mode, send either an exit or quit command to the server. To make the new data transport discoverable for KIE Server, create a META-INF/services/org.kie.server.services.api.KieServerExtension file in your Maven project and add the fully qualified class name of the KieServerExtension implementation class within the file. For this example, the file contains the single line org.kie.server.ext.mina.MinaDroolsKieServerExtension . Build your project and copy the resulting JAR file and the mina-core-2.0.9.jar file (which the extension depends on in this example) into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . Start the KIE Server and deploy the built project to the running KIE Server. You can deploy the project using either the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). 
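For reference, a minimal container deployment through the KIE Server REST API might look like the following curl call; this is a sketch, and the credentials, container ID, and the group, artifact, and version values are placeholders for your own project:

# Deploy the built KJAR as a KIE container named "demo"
curl -X PUT -u kieserverUser:kieserverPassword \
  -H "Content-Type: application/json" \
  -d '{"container-id": "demo", "release-id": {"group-id": "org.example", "artifact-id": "my-kjar", "version": "1.0.0"}}' \
  "http://localhost:8080/kie-server/services/rest/server/containers/demo"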
After your project is deployed on a running KIE Server, you can view the status of the new data transport in your KIE Server log and start using your new data transport: New data transport in the server log For this example, you can use Telnet to interact with the new MINA-based data transport in KIE Server: Starting Telnet and connecting to KIE Server on port 9123 in a command terminal telnet 127.0.0.1 9123 Example interactions with KIE Server in a command terminal Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. # Request body: demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"john","age":25}}}},{"fire-all-rules":""}]} # Server response: { "results" : [ { "key" : "", "value" : 1 } ], "facts" : [ ] } demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"mary","age":22}}}},{"fire-all-rules":""}]} { "results" : [ { "key" : "", "value" : 1 } ], "facts" : [ ] } demo|{"lookup":"defaultKieSession","commands":[{"insert":{"object":{"org.jbpm.test.Person":{"name":"james","age":25}}}},{"fire-all-rules":""}]} { "results" : [ { "key" : "", "value" : 1 } ], "facts" : [ ] } exit Connection closed by foreign host. Example server log output 21.3. Extending the KIE Server client with a custom client API KIE Server uses predefined client APIs that you can interact with to use KIE Server services. You can extend the KIE Server client with a custom client API to adapt KIE Server services to your business needs. As an example, this procedure adds a custom client API to KIE Server to accommodate a custom data transport (configured previously for this scenario) that is based on Apache MINA, an open-source Java network-application framework. Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> </dependencies> Implement the relevant ServicesClient interface in a Java class in your project, as shown in the following example: Sample RulesMinaServicesClient interface public interface RulesMinaServicesClient extends RuleServicesClient { } A specific interface is required because you must register client implementations based on the interface, and you can have only one implementation for a given interface. For this example, the custom MINA-based data transport uses the Drools extension, so this example RulesMinaServicesClient interface extends the existing RuleServicesClient client API from the Drools extension. 
Implement the RulesMinaServicesClient interface that KIE Server can use to provide the additional client functionality for the new MINA transport, as shown in the following example: Sample implementation of the RulesMinaServicesClient interface public class RulesMinaServicesClientImpl implements RulesMinaServicesClient { private String host; private Integer port; private Marshaller marshaller; public RulesMinaServicesClientImpl(KieServicesConfiguration configuration, ClassLoader classloader) { String[] serverDetails = configuration.getServerUrl().split(":"); this.host = serverDetails[0]; this.port = Integer.parseInt(serverDetails[1]); this.marshaller = MarshallerFactory.getMarshaller(configuration.getExtraJaxbClasses(), MarshallingFormat.JSON, classloader); } public ServiceResponse<String> executeCommands(String id, String payload) { try { String response = sendReceive(id, payload); if (response.startsWith("{")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException("Unable to send request to KIE Server", e); } } public ServiceResponse<String> executeCommands(String id, Command<?> cmd) { try { String response = sendReceive(id, marshaller.marshall(cmd)); if (response.startsWith("{")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException("Unable to send request to KIE Server", e); } } protected String sendReceive(String containerId, String content) throws Exception { // Flatten the content to be single line: content = content.replaceAll("\\n", ""); Socket minaSocket = null; PrintWriter out = null; BufferedReader in = null; StringBuffer data = new StringBuffer(); try { minaSocket = new Socket(host, port); out = new PrintWriter(minaSocket.getOutputStream(), true); in = new BufferedReader(new InputStreamReader(minaSocket.getInputStream())); // Prepare and send data: out.println(containerId + "|" + content); // Wait for the first line: data.append(in.readLine()); // Continue as long as data is available: while (in.ready()) { data.append(in.readLine()); } return data.toString(); } finally { out.close(); in.close(); minaSocket.close(); } } } This example implementation specifies the following data and behavior: Uses socket-based communication for simplicity Relies on default configurations from the KIE Server client and uses ServerUrl for providing the host and port of the MINA server Specifies JSON as the marshalling format Requires received messages to be JSON objects that start with an open bracket { Uses direct socket communication with a blocking API while waiting for the first line of the response and then reads all lines that are available Does not use stream mode and therefore disconnects the KIE Server session after invoking a command Implement the org.kie.server.client.helper.KieServicesClientBuilder interface in a Java class in your project, as shown in the following example: Sample implementation of the KieServicesClientBuilder interface public class MinaClientBuilderImpl implements KieServicesClientBuilder { 1 public String getImplementedCapability() { 2 return "BRM-Mina"; } public Map<Class<?>, Object> build(KieServicesConfiguration configuration, ClassLoader classLoader) { 3 Map<Class<?>, Object> services = new HashMap<Class<?>, Object>(); 
services.put(RulesMinaServicesClient.class, new RulesMinaServicesClientImpl(configuration, classLoader)); return services; } } 1 Enables you to provide additional client APIs to the generic KIE Server client infrastructure 2 Defines the KIE Server capability (extension) that the client uses 3 Provides a map of the client implementations, where the key is the interface and the value is the fully initialized implementation To make the new client API discoverable for the KIE Server client, create a META-INF/services/org.kie.server.client.helper.KieServicesClientBuilder file in your Maven project and add the fully qualified class name of the KieServicesClientBuilder implementation class within the file. For this example, the file contains the single line org.kie.server.ext.mina.client.MinaClientBuilderImpl . Build your project and copy the resulting JAR file into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . Start KIE Server and deploy the built project to the running KIE Server. You can deploy the project using either the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). After your project is deployed on a running KIE Server, you can start interacting with your new KIE Server client. You use your new client in the same way as the standard KIE Server client, by creating the client configuration and client instance, retrieving the service client by type, and invoking client methods. For this example, you can create a RulesMinaServiceClient client instance and invoke operations on KIE Server through the MINA transport: Sample implementation to create the RulesMinaServiceClient client protected RulesMinaServicesClient buildClient() { KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration("localhost:9123", null, null); List<String> capabilities = new ArrayList<String>(); // Explicitly add capabilities (the MINA client does not respond to `get-server-info` requests): capabilities.add("BRM-Mina"); configuration.setCapabilities(capabilities); configuration.setMarshallingFormat(MarshallingFormat.JSON); configuration.addJaxbClasses(extraClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(configuration); RulesMinaServicesClient rulesClient = kieServicesClient.getServicesClient(RulesMinaServicesClient.class); return rulesClient; } Sample configuration to invoke operations on KIE Server through the MINA transport RulesMinaServicesClient rulesClient = buildClient(); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, "defaultKieSession"); Person person = new Person(); person.setName("mary"); commands.add(commandsFactory.newInsert(person, "person")); commands.add(commandsFactory.newFireAllRules("fired")); ServiceResponse<String> response = rulesClient.executeCommands(containerId, executionCommand); Assert.assertNotNull(response); Assert.assertEquals(ResponseType.SUCCESS, response.getType()); String data = response.getResult(); Marshaller marshaller = MarshallerFactory.getMarshaller(extraClasses, MarshallingFormat.JSON, this.getClass().getClassLoader()); ExecutionResultImpl results = marshaller.unmarshall(data, ExecutionResultImpl.class); Assert.assertNotNull(results); Object personResult = results.getValue("person"); 
Assert.assertTrue(personResult instanceof Person); Assert.assertEquals("mary", ((Person) personResult).getName()); Assert.assertEquals("JBoss Community", ((Person) personResult).getAddress()); Assert.assertEquals(true, ((Person) personResult).isRegistered()); | [
"http://SERVER:PORT/kie-server/services/rest/server",
"{ \"type\": \"SUCCESS\", \"msg\": \"Kie Server info\", \"result\": { \"kie-server-info\": { \"id\": \"test-kie-server\", \"version\": \"7.67.0.20190818-050814\", \"name\": \"test-kie-server\", \"location\": \"http://localhost:8080/kie-server/services/rest/server\", \"capabilities\": [ \"KieServer\", \"BRM\", \"BPM\", \"CaseMgmt\", \"BPM-UI\", \"BRP\", \"DMN\", \"Swagger\" ], \"messages\": [ { \"severity\": \"INFO\", \"timestamp\": { \"java.util.Date\": 1566169865791 }, \"content\": [ \"Server KieServerInfo{serverId='test-kie-server', version='7.67.0.20190818-050814', name='test-kie-server', location='http:/localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]', messages=null', mode=DEVELOPMENT}started successfully at Sun Aug 18 23:11:05 UTC 2019\" ] } ], \"mode\": \"DEVELOPMENT\" } } }",
"/server/containers/instances/{containerId}/ksession/{ksessionId}",
"<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-rest-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> </dependencies>",
"public class CusomtDroolsKieServerApplicationComponentsService implements KieServerApplicationComponentsService { 1 private static final String OWNER_EXTENSION = \"Drools\"; 2 public Collection<Object> getAppComponents(String extension, SupportedTransports type, Object... services) { 3 // Do not accept calls from extensions other than the owner extension: if ( !OWNER_EXTENSION.equals(extension) ) { return Collections.emptyList(); } RulesExecutionService rulesExecutionService = null; 4 KieServerRegistry context = null; for( Object object : services ) { if( RulesExecutionService.class.isAssignableFrom(object.getClass()) ) { rulesExecutionService = (RulesExecutionService) object; continue; } else if( KieServerRegistry.class.isAssignableFrom(object.getClass()) ) { context = (KieServerRegistry) object; continue; } } List<Object> components = new ArrayList<Object>(1); if( SupportedTransports.REST.equals(type) ) { components.add(new CustomResource(rulesExecutionService, context)); 5 } return components; } }",
"// Custom base endpoint: @Path(\"server/containers/instances/{containerId}/ksession\") public class CustomResource { private static final Logger logger = LoggerFactory.getLogger(CustomResource.class); private KieCommands commandsFactory = KieServices.Factory.get().getCommands(); private RulesExecutionService rulesExecutionService; private KieServerRegistry registry; public CustomResource() { } public CustomResource(RulesExecutionService rulesExecutionService, KieServerRegistry registry) { this.rulesExecutionService = rulesExecutionService; this.registry = registry; } // Supported HTTP method, path parameters, and data formats: @POST @Path(\"/{ksessionId}\") @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON}) public Response insertFireReturn(@Context HttpHeaders headers, @PathParam(\"containerId\") String id, @PathParam(\"ksessionId\") String ksessionId, String cmdPayload) { Variant v = getVariant(headers); String contentType = getContentType(headers); // Marshalling behavior and supported actions: MarshallingFormat format = MarshallingFormat.fromType(contentType); if (format == null) { format = MarshallingFormat.valueOf(contentType); } try { KieContainerInstance kci = registry.getContainer(id); Marshaller marshaller = kci.getMarshaller(format); List<?> listOfFacts = marshaller.unmarshall(cmdPayload, List.class); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, ksessionId); for (Object fact : listOfFacts) { commands.add(commandsFactory.newInsert(fact, fact.toString())); } commands.add(commandsFactory.newFireAllRules()); commands.add(commandsFactory.newGetObjects()); ExecutionResults results = rulesExecutionService.call(kci, executionCommand); String result = marshaller.marshall(results); logger.debug(\"Returning OK response with content '{}'\", result); return createResponse(result, v, Response.Status.OK); } catch (Exception e) { // If marshalling fails, return the `call-container` response to maintain backward compatibility: String response = \"Execution failed with error : \" + e.getMessage(); logger.debug(\"Returning Failure response with content '{}'\", response); return createResponse(response, v, Response.Status.INTERNAL_SERVER_ERROR); } } }",
"[ { \"org.jbpm.test.Person\": { \"name\": \"john\", \"age\": 25 } }, { \"org.jbpm.test.Person\": { \"name\": \"mary\", \"age\": 22 } } ]",
"13:37:20,347 INFO [stdout] (default task-24) Hello mary 13:37:20,348 INFO [stdout] (default task-24) Hello john",
"<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-internal</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.25</version> </dependency> <dependency> <groupId>org.apache.mina</groupId> <artifactId>mina-core</artifactId> <version>2.1.3</version> </dependency> </dependencies>",
"public class MinaDroolsKieServerExtension implements KieServerExtension { private static final Logger logger = LoggerFactory.getLogger(MinaDroolsKieServerExtension.class); public static final String EXTENSION_NAME = \"Drools-Mina\"; private static final Boolean disabled = Boolean.parseBoolean(System.getProperty(\"org.kie.server.drools-mina.ext.disabled\", \"false\")); private static final String MINA_HOST = System.getProperty(\"org.kie.server.drools-mina.ext.port\", \"localhost\"); private static final int MINA_PORT = Integer.parseInt(System.getProperty(\"org.kie.server.drools-mina.ext.port\", \"9123\")); // Taken from dependency on the `Drools` extension: private KieContainerCommandService batchCommandService; // Specific to MINA: private IoAcceptor acceptor; public boolean isActive() { return disabled == false; } public void init(KieServerImpl kieServer, KieServerRegistry registry) { KieServerExtension droolsExtension = registry.getServerExtension(\"Drools\"); if (droolsExtension == null) { logger.warn(\"No Drools extension available, quitting...\"); return; } List<Object> droolsServices = droolsExtension.getServices(); for( Object object : droolsServices ) { // If the given service is null (not configured), continue to the next service: if (object == null) { continue; } if( KieContainerCommandService.class.isAssignableFrom(object.getClass()) ) { batchCommandService = (KieContainerCommandService) object; continue; } } if (batchCommandService != null) { acceptor = new NioSocketAcceptor(); acceptor.getFilterChain().addLast( \"codec\", new ProtocolCodecFilter( new TextLineCodecFactory( Charset.forName( \"UTF-8\" )))); acceptor.setHandler( new TextBasedIoHandlerAdapter(batchCommandService) ); acceptor.getSessionConfig().setReadBufferSize( 2048 ); acceptor.getSessionConfig().setIdleTime( IdleStatus.BOTH_IDLE, 10 ); try { acceptor.bind( new InetSocketAddress(MINA_HOST, MINA_PORT) ); logger.info(\"{} -- Mina server started at {} and port {}\", toString(), MINA_HOST, MINA_PORT); } catch (IOException e) { logger.error(\"Unable to start Mina acceptor due to {}\", e.getMessage(), e); } } } public void destroy(KieServerImpl kieServer, KieServerRegistry registry) { if (acceptor != null) { acceptor.dispose(); acceptor = null; } logger.info(\"{} -- Mina server stopped\", toString()); } public void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters) { // Empty, already handled by the `Drools` extension } public List<Object> getAppComponents(SupportedTransports type) { // Nothing for supported transports (REST or JMS) return Collections.emptyList(); } public <T> T getAppComponents(Class<T> serviceType) { return null; } public String getImplementedCapability() { return \"BRM-Mina\"; } public List<Object> getServices() { return Collections.emptyList(); } public String getExtensionName() { return EXTENSION_NAME; } public Integer getStartOrder() { return 20; } @Override public String toString() { return EXTENSION_NAME + \" KIE Server extension\"; } }",
"public interface KieServerExtension { boolean isActive(); void init(KieServerImpl kieServer, KieServerRegistry registry); void destroy(KieServerImpl kieServer, KieServerRegistry registry); void createContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); void disposeContainer(String id, KieContainerInstance kieContainerInstance, Map<String, Object> parameters); List<Object> getAppComponents(SupportedTransports type); <T> T getAppComponents(Class<T> serviceType); String getImplementedCapability(); 1 List<Object> getServices(); String getExtensionName(); 2 Integer getStartOrder(); 3 }",
"public class TextBasedIoHandlerAdapter extends IoHandlerAdapter { private static final Logger logger = LoggerFactory.getLogger(TextBasedIoHandlerAdapter.class); private KieContainerCommandService batchCommandService; public TextBasedIoHandlerAdapter(KieContainerCommandService batchCommandService) { this.batchCommandService = batchCommandService; } @Override public void messageReceived( IoSession session, Object message ) throws Exception { String completeMessage = message.toString(); logger.debug(\"Received message '{}'\", completeMessage); if( completeMessage.trim().equalsIgnoreCase(\"quit\") || completeMessage.trim().equalsIgnoreCase(\"exit\") ) { session.close(false); return; } String[] elements = completeMessage.split(\"\\\\|\"); logger.debug(\"Container id {}\", elements[0]); try { ServiceResponse<String> result = batchCommandService.callContainer(elements[0], elements[1], MarshallingFormat.JSON, null); if (result.getType().equals(ServiceResponse.ResponseType.SUCCESS)) { session.write(result.getResult()); logger.debug(\"Successful message written with content '{}'\", result.getResult()); } else { session.write(result.getMsg()); logger.debug(\"Failure message written with content '{}'\", result.getMsg()); } } catch (Exception e) { } } }",
"Drools-Mina KIE Server extension -- Mina server started at localhost and port 9123 Drools-Mina KIE Server extension has been successfully registered as server extension",
"telnet 127.0.0.1 9123",
"Trying 127.0.0.1 Connected to localhost. Escape character is '^]'. Request body: demo|{\"lookup\":\"defaultKieSession\",\"commands\":[{\"insert\":{\"object\":{\"org.jbpm.test.Person\":{\"name\":\"john\",\"age\":25}}}},{\"fire-all-rules\":\"\"}]} Server response: { \"results\" : [ { \"key\" : \"\", \"value\" : 1 } ], \"facts\" : [ ] } demo|{\"lookup\":\"defaultKieSession\",\"commands\":[{\"insert\":{\"object\":{\"org.jbpm.test.Person\":{\"name\":\"mary\",\"age\":22}}}},{\"fire-all-rules\":\"\"}]} { \"results\" : [ { \"key\" : \"\", \"value\" : 1 } ], \"facts\" : [ ] } demo|{\"lookup\":\"defaultKieSession\",\"commands\":[{\"insert\":{\"object\":{\"org.jbpm.test.Person\":{\"name\":\"james\",\"age\":25}}}},{\"fire-all-rules\":\"\"}]} { \"results\" : [ { \"key\" : \"\", \"value\" : 1 } ], \"facts\" : [ ] } exit Connection closed by foreign host.",
"16:33:40,206 INFO [stdout] (NioProcessor-2) Hello john 16:34:03,877 INFO [stdout] (NioProcessor-2) Hello mary 16:34:19,800 INFO [stdout] (NioProcessor-2) Hello james",
"<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <version>USD{version.org.kie}</version> </dependency> </dependencies>",
"public interface RulesMinaServicesClient extends RuleServicesClient { }",
"public class RulesMinaServicesClientImpl implements RulesMinaServicesClient { private String host; private Integer port; private Marshaller marshaller; public RulesMinaServicesClientImpl(KieServicesConfiguration configuration, ClassLoader classloader) { String[] serverDetails = configuration.getServerUrl().split(\":\"); this.host = serverDetails[0]; this.port = Integer.parseInt(serverDetails[1]); this.marshaller = MarshallerFactory.getMarshaller(configuration.getExtraJaxbClasses(), MarshallingFormat.JSON, classloader); } public ServiceResponse<String> executeCommands(String id, String payload) { try { String response = sendReceive(id, payload); if (response.startsWith(\"{\")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException(\"Unable to send request to KIE Server\", e); } } public ServiceResponse<String> executeCommands(String id, Command<?> cmd) { try { String response = sendReceive(id, marshaller.marshall(cmd)); if (response.startsWith(\"{\")) { return new ServiceResponse<String>(ResponseType.SUCCESS, null, response); } else { return new ServiceResponse<String>(ResponseType.FAILURE, response); } } catch (Exception e) { throw new KieServicesException(\"Unable to send request to KIE Server\", e); } } protected String sendReceive(String containerId, String content) throws Exception { // Flatten the content to be single line: content = content.replaceAll(\"\\\\n\", \"\"); Socket minaSocket = null; PrintWriter out = null; BufferedReader in = null; StringBuffer data = new StringBuffer(); try { minaSocket = new Socket(host, port); out = new PrintWriter(minaSocket.getOutputStream(), true); in = new BufferedReader(new InputStreamReader(minaSocket.getInputStream())); // Prepare and send data: out.println(containerId + \"|\" + content); // Wait for the first line: data.append(in.readLine()); // Continue as long as data is available: while (in.ready()) { data.append(in.readLine()); } return data.toString(); } finally { out.close(); in.close(); minaSocket.close(); } } }",
"public class MinaClientBuilderImpl implements KieServicesClientBuilder { 1 public String getImplementedCapability() { 2 return \"BRM-Mina\"; } public Map<Class<?>, Object> build(KieServicesConfiguration configuration, ClassLoader classLoader) { 3 Map<Class<?>, Object> services = new HashMap<Class<?>, Object>(); services.put(RulesMinaServicesClient.class, new RulesMinaServicesClientImpl(configuration, classLoader)); return services; } }",
"protected RulesMinaServicesClient buildClient() { KieServicesConfiguration configuration = KieServicesFactory.newRestConfiguration(\"localhost:9123\", null, null); List<String> capabilities = new ArrayList<String>(); // Explicitly add capabilities (the MINA client does not respond to `get-server-info` requests): capabilities.add(\"BRM-Mina\"); configuration.setCapabilities(capabilities); configuration.setMarshallingFormat(MarshallingFormat.JSON); configuration.addJaxbClasses(extraClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(configuration); RulesMinaServicesClient rulesClient = kieServicesClient.getServicesClient(RulesMinaServicesClient.class); return rulesClient; }",
"RulesMinaServicesClient rulesClient = buildClient(); List<Command<?>> commands = new ArrayList<Command<?>>(); BatchExecutionCommand executionCommand = commandsFactory.newBatchExecution(commands, \"defaultKieSession\"); Person person = new Person(); person.setName(\"mary\"); commands.add(commandsFactory.newInsert(person, \"person\")); commands.add(commandsFactory.newFireAllRules(\"fired\")); ServiceResponse<String> response = rulesClient.executeCommands(containerId, executionCommand); Assert.assertNotNull(response); Assert.assertEquals(ResponseType.SUCCESS, response.getType()); String data = response.getResult(); Marshaller marshaller = MarshallerFactory.getMarshaller(extraClasses, MarshallingFormat.JSON, this.getClass().getClassLoader()); ExecutionResultImpl results = marshaller.unmarshall(data, ExecutionResultImpl.class); Assert.assertNotNull(results); Object personResult = results.getValue(\"person\"); Assert.assertTrue(personResult instanceof Person); Assert.assertEquals(\"mary\", ((Person) personResult).getName()); Assert.assertEquals(\"JBoss Community\", ((Person) personResult).getAddress()); Assert.assertEquals(true, ((Person) personResult).isRegistered());"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/kie-server-extensions-con_execution-server |
Chapter 3. Obtaining and modifying container images | Chapter 3. Obtaining and modifying container images A containerized overcloud requires access to a registry with the required container images. This chapter provides information on how to prepare the registry and your undercloud and overcloud configuration to use container images for Red Hat OpenStack Platform. 3.1. Preparing container images The overcloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize this environment file that you can use to prepare your container images. Note If you need to configure specific container image versions for your overcloud, you must pin the images to a specific version. For more information, see Pinning container images for the overcloud . Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images. This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. 3.2. Container image preparation parameters The default file for preparing your containers ( containers-prepare-parameter.yaml ) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images: Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters that you can use with each ContainerImagePrepare strategy: Parameter Description excludes List of regular expressions to exclude image names from a strategy. includes List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. modify_append_tag String to append to the tag for the destination image. For example, if you pull an image with the tag 17.0.0-5.161 and set the modify_append_tag to -hotfix , the director tags the final image as 17.0.0-5.161-hotfix. modify_only_with_labels A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. modify_role String of ansible role names to run during upload but before pushing the image to the destination registry. modify_vars Dictionary of variables to pass to modify_role . push_destination Defines the namespace of the registry that you want to push images to during the upload process. If set to true , the push_destination is set to the undercloud registry namespace using the hostname, which is the recommended method. If set to false , the push to a local registry does not occur and nodes pull images directly from the source. 
If set to a custom value, director pushes images to an external local registry. If you set this parameter to false in production environments while pulling images directly from Red Hat Container Catalog, all overcloud nodes will simultaneously pull the images from the Red Hat Container Catalog over your external connection, which can cause bandwidth issues. Only use false to pull directly from a Red Hat Satellite Server hosting the container images. If the push_destination parameter is set to false or is not defined and the remote registry requires authentication, set the ContainerImageRegistryLogin parameter to true and include the credentials with the ContainerImageRegistryCredentials parameter. pull_source The source registry from where to pull the original container images. set A dictionary of key: value definitions that define where to obtain the initial images. tag_from_label Use the value of specified container image metadata labels to create a tag for every image and pull that tagged image. For example, if you set tag_from_label: {version}-{release} , director uses the version and release labels to construct a new tag. For one container, version might be set to 17.0.0 and release might be set to 5.161 , which results in the tag 17.0.0-5.161. Director uses this parameter only if you have not defined tag in the set dictionary. Important When you push images to the undercloud, use push_destination: true instead of push_destination: UNDERCLOUD_IP:PORT . The push_destination: true method provides a level of consistency across both IPv4 and IPv6 addresses. The set parameter accepts a set of key: value definitions: Key Description ceph_image The name of the Ceph Storage container image. ceph_namespace The namespace of the Ceph Storage container image. ceph_tag The tag of the Ceph Storage container image. ceph_alertmanager_image ceph_alertmanager_namespace ceph_alertmanager_tag The name, namespace, and tag of the Ceph Storage Alert Manager container image. ceph_grafana_image ceph_grafana_namespace ceph_grafana_tag The name, namespace, and tag of the Ceph Storage Grafana container image. ceph_node_exporter_image ceph_node_exporter_namespace ceph_node_exporter_tag The name, namespace, and tag of the Ceph Storage Node Exporter container image. ceph_prometheus_image ceph_prometheus_namespace ceph_prometheus_tag The name, namespace, and tag of the Ceph Storage Prometheus container image. name_prefix A prefix for each OpenStack service image. name_suffix A suffix for each OpenStack service image. namespace The namespace for each OpenStack service image. neutron_driver The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. tag Sets a specific tag for all images from the source. If not defined, director uses the Red Hat OpenStack Platform version number as the default value. This parameter takes precedence over the tag_from_label value. Note The container images use multi-stream tags based on the Red Hat OpenStack Platform version. This means that there is no longer a latest tag. 3.3. Guidelines for container image tagging The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This format follows the label metadata for each container, which is version-release . version Corresponds to a major and minor version of Red Hat OpenStack Platform. 
These versions act as streams that contain one or more releases. release Corresponds to a release of a specific container image version within a version stream. For example, if the latest version of Red Hat OpenStack Platform is 17.0.0 and the release for the container image is 5.161 , then the resulting tag for the container image is 17.0.0-5.161. The Red Hat Container Registry also uses a set of major and minor version tags that link to the latest release for that container image version. For example, both 17.0 and 17.0.0 link to the latest release in the 17.0.0 container stream. If a new minor release of 17.0 occurs, the 17.0 tag links to the latest release for the new minor release stream while the 17.0.0 tag continues to link to the latest release within the 17.0.0 stream. The ContainerImagePrepare parameter contains two sub-parameters that you can use to determine which container image to download. These sub-parameters are the tag parameter within the set dictionary, and the tag_from_label parameter. Use the following guidelines to determine whether to use tag or tag_from_label . The default value for tag is the major version for your OpenStack Platform version. For this version it is 17.0. This always corresponds to the latest minor version and release. To change to a specific minor version for OpenStack Platform container images, set the tag to a minor version. For example, to change to 17.0.2, set tag to 17.0.2. When you set tag , director always downloads the latest container image release for the version set in tag during installation and updates. If you do not set tag , director uses the value of tag_from_label in conjunction with the latest major version. The tag_from_label parameter generates the tag from the label metadata of the latest container image release it inspects from the Red Hat Container Registry. For example, the labels for a certain container might use the following version and release metadata: The default value for tag_from_label is {version}-{release} , which corresponds to the version and release metadata labels for each container image. For example, if a container image has 17.0.0 set for version and 5.161 set for release , the resulting tag for the container image is 17.0.0-5.161. The tag parameter always takes precedence over the tag_from_label parameter. To use tag_from_label , omit the tag parameter from your container preparation configuration. A key difference between tag and tag_from_label is that director uses tag to pull an image only based on major or minor version tags, which the Red Hat Container Registry links to the latest image release within a version stream, while director uses tag_from_label to perform a metadata inspection of each container image so that director generates a tag and pulls the corresponding image. 3.4. Obtaining container images from private registries The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file. ContainerImageRegistryCredentials Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. 
Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries. In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials : Important The default ContainerImagePrepare parameter pulls container images from registry.redhat.io , which requires authentication. For more information, see Red Hat Container Registry Authentication . ContainerImageRegistryLogin The ContainerImageRegistryLogin parameter is used to control whether an overcloud node system needs to log in to the remote registry to fetch the container images. This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host images. You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy. However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials and you set ContainerImageRegistryLogin to true , the deployment might fail when trying to perform a login. If the overcloud nodes do not have network connectivity to the registry hosts defined in the ContainerImageRegistryCredentials , set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud. 3.5. Layering image preparation entries The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries. The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 17.0-hotfix : The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match. 3.6. Modifying images during preparation It is possible to modify images during image preparation, and then immediately deploy the overcloud with modified images. Note Red Hat OpenStack Platform (RHOSP) director supports modifying images during preparation for RHOSP containers, not for Ceph containers. Scenarios for modifying images include: As part of a continuous integration pipeline where images are modified with the changes being tested before deployment. As part of a development workflow where local changes must be deployed for testing and development. When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes. To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image. The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. 
Control the modification with the modify-specific keys in the ContainerImagePrepare parameter: modify_role specifies the Ansible role to invoke for each image to modify. modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image. modify_vars is a dictionary of Ansible variables to pass to the role. To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role. While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect: Important To use the openstack tripleo container image prepare command, your undercloud must contain a running image-serve registry. As a result, you cannot run this command before a new undercloud installation because the image-serve registry will not be installed. You can run this command after a successful undercloud installation. 3.7. Updating existing packages on container images Note Red Hat OpenStack Platform (RHOSP) director supports updating existing packages on container images for RHOSP containers, not for Ceph containers. Procedure The following example ContainerImagePrepare entry updates in all packages on the container images by using the dnf repository configuration of the undercloud host: 3.8. Installing additional RPM files to container images You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository. Note Red Hat OpenStack Platform (RHOSP) director supports installing additional RPM files to container images for RHOSP containers, not for Ceph containers. Procedure The following example ContainerImagePrepare entry installs some hotfix packages on only the nova-compute image: 3.9. Modifying container images with a custom Dockerfile You can specify a directory that contains a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. Note Red Hat OpenStack Platform (RHOSP) director supports modifying container images with a custom Dockerfile for RHOSP containers, not for Ceph containers. Procedure The following example runs the custom Dockerfile on the nova-compute image: The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user: 3.10. Preparing a Satellite server for container images Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite also acts as a registry for other container-enabled systems to use. For more information about managing container images, see Managing Container Images in the Red Hat Satellite 6 Content Management Guide . The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME . Substitute this organization for your own Satellite 6 organization. 
Note This procedure requires authentication credentials to access container images from registry.redhat.io . Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication" . Procedure Create a list of all container images: If you plan to install Ceph and enable the Ceph Dashboard, you need the following ose-prometheus containers: Copy the satellite_images file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the undercloud. Run the following hammer command to create a new product ( OSP Containers ) in your Satellite organization: This custom product will contain your images. Add the overcloud container images from the satellite_images file: Add the Ceph Storage container image: Note If you want to install the Ceph dashboard, include --name rhceph-5-dashboard-rhel8 in the hammer repository create command: Synchronize the container images: Wait for the Satellite server to complete synchronization. Note Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to automatically login using a configuration file. For more information, see the Authentication section in the Hammer CLI Guide . If your Satellite 6 server uses content views, create a new content view version to incorporate the images and promote it along environments in your application life cycle. This largely depends on how you structure your application lifecycle. For example, if you have an environment called production in your lifecycle and you want the container images to be available in that environment, create a content view that includes the container images and promote that content view to the production environment. For more information, see Managing Content Views . Check the available tags for the base image: This command displays tags for the OpenStack Platform container images within a content view for a particular environment. Return to the undercloud and generate a default environment file that prepares images using your Satellite server as a source. Run the following example command to generate the environment file: --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images for the undercloud. In this case, the name of the file is containers-prepare-parameter.yaml . Edit the containers-prepare-parameter.yaml file and modify the following parameters: push_destination - Set this to true or false depending on your chosen container image management strategy. If you set this parameter to false , the overcloud nodes pull images directly from the Satellite. If you set this parameter to true , the director pulls the images from the Satellite to the undercloud registry and the overcloud pulls the images from the undercloud registry. namespace - The URL of the registry on the Satellite server. name_prefix - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views: If you use content views, the structure is [org]-[environment]-[content view]-[product]- . For example: acme-production-myosp16-osp_containers- . If you do not use content views, the structure is [org]-[product]- . For example: acme-osp_containers- . 
ceph_namespace , ceph_image , ceph_tag - If you use Ceph Storage, include these additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the name_prefix option. The following example environment file contains Satellite-specific parameters: Note To use a specific container image version stored on your Red Hat Satellite Server, set the tag key-value pair to the specific version in the set dictionary. For example, to use the 17.0.2 image stream, set tag: 17.0.2 in the set dictionary. You must define the containers-prepare-parameter.yaml environment file in the undercloud.conf configuration file, otherwise the undercloud uses the default values: | [
"openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - (strategy one) - (strategy two) - (strategy three)",
"parameter_defaults: ContainerImagePrepare: - set: tag: 17.0",
"parameter_defaults: ContainerImagePrepare: - set: tag: 17.0.2",
"parameter_defaults: ContainerImagePrepare: - set: # tag: 17.0 tag_from_label: '{version}-{release}'",
"\"Labels\": { \"release\": \"5.161\", \"version\": \"17.0.0\", }",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ - push_destination: true set: namespace: registry.internalsite.com/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' registry.internalsite.com: myuser2: '0th3rp@55w0rd!' '192.0.2.1:8787': myuser3: '@n0th3rp@55w0rd!'",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: true",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: false",
"parameter_defaults: ContainerImagePrepare: - tag_from_label: \"{version}-{release}\" push_destination: true excludes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel9 name_prefix: openstack- name_suffix: '' tag:17.0 - push_destination: true includes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel9 tag: 17.0-hotfix",
"sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml",
"ContainerImagePrepare: - push_destination: true modify_role: tripleo-modify-image modify_append_tag: \"-updated\" modify_vars: tasks_from: yum_update.yml compare_host_packages: true yum_repos_dir_path: /etc/yum.repos.d",
"ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: rpm_install.yml rpms_path: /home/stack/nova-hotfix-pkgs",
"ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: modify_image.yml modify_dir_path: /home/stack/nova-custom",
"FROM registry.redhat.io/rhosp-rhel9/openstack-nova-compute:latest USER \"root\" COPY customize.sh /tmp/ RUN /tmp/customize.sh USER \"nova\"",
"sudo podman search --limit 1000 \"registry.redhat.io/rhosp-rhel9\" --format=\"{{ .Name }}\" | sort > satellite_images sudo podman search --limit 1000 \"registry.redhat.io/rhceph\" | grep rhceph-5-dashboard-rhel8 sudo podman search --limit 1000 \"registry.redhat.io/rhceph\" | grep rhceph-5-rhel8 sudo podman search --limit 1000 \"registry.redhat.io/openshift\" | grep ose-prometheus",
"registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 registry.redhat.io/openshift4/ose-prometheus:v4.6 registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6",
"hammer product create --organization \"ACME\" --name \"OSP Containers\"",
"while read IMAGE; do IMAGE_NAME=USD(echo USDIMAGE | cut -d\"/\" -f3 | sed \"s/openstack-//g\") ; IMAGE_NOURL=USD(echo USDIMAGE | sed \"s/registry.redhat.io\\///g\") ; hammer repository create --organization \"ACME\" --product \"OSP Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name USDIMAGE_NOURL --upstream-username USERNAME --upstream-password PASSWORD --name USDIMAGE_NAME ; done < satellite_images",
"hammer repository create --organization \"ACME\" --product \"OSP Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name rhceph/rhceph-5-rhel8 --upstream-username USERNAME --upstream-password PASSWORD --name rhceph-5-rhel8",
"hammer repository create --organization \"ACME\" --product \"OSP Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name rhceph/rhceph-5-dashboard-rhel8 --upstream-username USERNAME --upstream-password PASSWORD --name rhceph-5-dashboard-rhel8",
"hammer product synchronize --organization \"ACME\" --name \"OSP Containers\"",
"hammer docker tag list --repository \"base\" --organization \"ACME\" --lifecycle-environment \"production\" --product \"OSP Containers\"",
"sudo openstack tripleo container image prepare default --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: ceph_image: acme-production-myosp16_1-osp_containers-rhceph-5 ceph_namespace: satellite.example.com:5000 ceph_tag: latest name_prefix: acme-production-myosp16_1-osp_containers- name_suffix: '' namespace: satellite.example.com:5000 neutron_driver: null tag: '17.0'",
"container_images_file = /home/stack/containers-prepare-parameter.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/transitioning_to_containerized_services/assembly_obtaining-and-modifying-container-images |
Installation Guide | Installation Guide Red Hat Ceph Storage 4 Installing Red Hat Ceph Storage on Red Hat Enterprise Linux Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/index |
8.3.6. Raw Audit Messages | 8.3.6. Raw Audit Messages Raw audit messages are logged to /var/log/audit/audit.log . The following is an example AVC denial (and the associated system call) that occurred when the Apache HTTP Server (running in the httpd_t domain) attempted to access the /var/www/html/file1 file (labeled with the samba_share_t type): { getattr } The item in the curly brackets indicates the permission that was denied. The getattr entry indicates the source process was trying to read the target file's status information. This occurs before reading files. This action is denied due to the file being accessed having a wrong label. Commonly seen permissions include getattr , read , and write . comm=" httpd " The executable that launched the process. The full path of the executable is found in the exe= section of the system call ( SYSCALL ) message, which in this case, is exe="/usr/sbin/httpd" . path=" /var/www/html/file1 " The path to the object (target) the process attempted to access. scontext=" unconfined_u:system_r:httpd_t:s0 " The SELinux context of the process that attempted the denied action. In this case, it is the SELinux context of the Apache HTTP Server, which is running in the httpd_t domain. tcontext=" unconfined_u:object_r:samba_share_t:s0 " The SELinux context of the object (target) the process attempted to access. In this case, it is the SELinux context of file1 . Note that the samba_share_t type is not accessible to processes running in the httpd_t domain. In certain situations, the tcontext may match the scontext , for example, when a process attempts to execute a system service that will change characteristics of that running process, such as the user ID. Also, the tcontext may match the scontext when a process tries to use more resources (such as memory) than normal limits allow, resulting in a security check to see if that process is allowed to break those limits. From the system call ( SYSCALL ) message, two items are of interest: success= no : indicates whether the denial (AVC) was enforced or not. success=no indicates the system call was not successful (SELinux denied access). success=yes indicates the system call was successful. This can be seen for permissive domains or unconfined domains, such as initrc_t and kernel_t . exe=" /usr/sbin/httpd " : the full path to the executable that launched the process, which in this case, is exe="/usr/sbin/httpd" . An incorrect file type is a common cause for SELinux denying access. To start troubleshooting, compare the source context ( scontext ) with the target context ( tcontext ). Should the process ( scontext ) be accessing such an object ( tcontext )? For example, the Apache HTTP Server ( httpd_t ) should only be accessing types specified in the httpd_selinux (8) manual page, such as httpd_sys_content_t , public_content_t , and so on, unless configured otherwise. | [
"type=AVC msg=audit(1226874073.147:96): avc: denied { getattr } for pid=2465 comm=\"httpd\" path=\"/var/www/html/file1\" dev=dm-0 ino=284133 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file type=SYSCALL msg=audit(1226874073.147:96): arch=40000003 syscall=196 success=no exit=-13 a0=b98df198 a1=bfec85dc a2=54dff4 a3=2008171 items=0 ppid=2463 pid=2465 auid=502 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=6 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-raw_audit_messages |
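For the AVC denial analyzed in the section above, the following is a minimal troubleshooting sketch. It assumes the standard audit and policycoreutils tooling is installed and that the file should carry the default label for content under /var/www/html; adjust the path to match your system:

# Check which label the policy expects for this path
matchpathcon /var/www/html/file1

# Restore the default label (httpd_sys_content_t for content under /var/www/html)
restorecon -v /var/www/html/file1

# Confirm that no new AVC denials are logged for the access
ausearch -m AVC -ts recent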
7.221. udisks | 7.221. udisks 7.221.1. RHBA-2015:1336 - udisks bug fix and enhancement update Updated udisks packages that fix one bug and add two enhancements are now available for Red Hat Enterprise Linux 6. The udisks packages provide a daemon, D-Bus API, and command-line tools for managing disks and storage devices. Bug Fix BZ# 1121742 Prior to this update, an external storage device could be unmounted forcefully when a device entered the DM_SUSPENDED=1 state for a moment while performing a set of changes during the cleanup procedure. To fix this bug, an exception for ignoring such a device in the cleanup procedure has been added to the UDisks daemon. As a result, DeviceMapper devices are no longer unmounted forcefully in the described situation. Enhancements BZ# 673102 With this update, additional mount points and a list of allowed mount options can be specified by means of udev rules. Flexibility of the udev rules format enables the system administrator to write custom rules to enforce or limit specific mount options for a specific set of devices. For example, USB drives can be limited to be always mounted as read-only. BZ# 681875 This update enables the user to configure the udisks tool to enforce the "noexec" global option on all unprivileged users mount points. On desktop systems, the "noexec" option can protect users from mistakenly running certain applications. Users of udisks are advised to upgrade to these updated packages, which fix this bug and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-udisks |
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator | Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator 3.1. Release notes 3.1.1. Custom Metrics Autoscaler Operator release notes The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues. The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Container Platform horizontal pod autoscaler (HPA). Note The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 3.1.1.1. Supported versions The following table defines the Custom Metrics Autoscaler Operator versions for each OpenShift Container Platform version. Version OpenShift Container Platform version General availability 2.14.1 4.16 General availability 2.14.1 4.15 General availability 2.14.1 4.14 General availability 2.14.1 4.13 General availability 2.14.1 4.12 General availability 3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:7348 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.1.2.1. Bug fixes Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. ( OCPBUGS-37989 ) 3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator The following release notes are for previous versions of the Custom Metrics Autoscaler Operator. For the current version, see Custom Metrics Autoscaler Operator release notes . 3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE, a new feature, and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:5865 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.1.1. New features and enhancements 3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on a schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales the pods to the desired number of replicas. When the time frame ends, the Operator scales the pods back down to the previous level. For more information, see Understanding the Cron trigger . 3.1.2.1.2. Bug fixes Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated.
As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. ( OCPBUGS-32521 ) 3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1 release notes This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:4837 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.2.1. New features and enhancements 3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps. For more information, see Custom CA certificates for the Custom Metrics Autoscaler . 3.1.2.2.2. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Scaled objects containing cron triggers are currently not supported for the custom metrics autoscaler. ( OCPBUGS-34018 ) 3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:2901 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.3.1. Bug fixes Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. ( OCPBUGS-30305 ) Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. 
( OCPBUGS-30360 ) Previously, when following an HTTP redirect to a domain that is not on a matching subdomain or on an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie . For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. ( OCPBUGS-30365 ) Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. ( OCPBUGS-30370 ) Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. ( OCPBUGS-30397 ) Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. ( OCPBUGS-30894 ) 3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:2043 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.4.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-32395 ) 3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:1812 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.5.1. Bug fixes Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. ( OCPBUGS-30145 ) Previously the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http . 
This fix corrects the ServiceMonitor CR to reference the proper port name of metrics . As a result, the Service Monitor functions properly. ( OCPBUGS-25806 ) 3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2023:6144 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.6.1. Bug fixes Because the Custom Metrics Autoscaler Operator version 2.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. ( OCPBUGS-22361 ) 3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.7.1. New features and enhancements 3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on OpenShift ROSA and OpenShift Dedicated managed clusters. Previous versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace. This prevented the Operator from being installed on OpenShift ROSA and OpenShift Dedicated clusters. This version of the Custom Metrics Autoscaler allows installation in other namespaces, such as openshift-operators or keda , enabling installation on ROSA and Dedicated clusters. 3.1.2.7.2. Bug fixes Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. ( OCPBUGS-15779 ) Previously, any annotation or label change to objects managed by the Custom Metrics Autoscaler was reverted by the Custom Metrics Autoscaler Operator any time the Keda Controller was modified, for example after a configuration change. This caused continuous changing of labels in your objects. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotations or labels are no longer inappropriately reverted. ( OCPBUGS-15590 ) 3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089 .
Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.8.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-15264 ) Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to be an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. ( OCPBUGS-15038 ) Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. ( OCPBUGS-15293 ) Previously, if you used a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter was set to none , the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with an authentication method other than pod identity and with the podIdentity parameter set to none now scales properly. ( OCPBUGS-15274 ) 3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.9.1. New features and enhancements 3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.2.9.1.2.
Performance metrics You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator. 3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready. 3.1.2.9.1.4. Replica fall back for scaled objects You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source. 3.1.2.9.1.5. Customizable HPA naming for scaled objects You can now specify a custom name for the horizontal pod autoscaler in scaled objects. 3.1.2.9.1.6. Activation and scaling thresholds Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator does that scaling, after which the HPA performs the scaling. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies. 3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683 . Important The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature. 3.1.2.10.1. New features and enhancements 3.1.2.10.1.1. Operator upgrade support You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" for information on upgrading an Operator. 3.1.2.10.1.2. must-gather support You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the OpenShift Container Platform must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different than for other operators. See "Gathering debugging data in the "Additional resources" for more information. 3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042 . Important The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature. 3.1.2.11.1. New features and enhancements 3.1.2.11.1.1. Audit Logging You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 3.1.2.11.1.2. Scale applications based on Apache Kafka metrics You can now use the KEDA Apache kafka trigger/scaler to scale deployments based on an Apache Kafka topic. 3.1.2.11.1.3. Scale applications based on CPU metrics You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics. 3.1.2.11.1.4. Scale applications based on memory metrics You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics. 3.2. 
Custom Metrics Autoscaler Operator overview As a developer, you can use the Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how OpenShift Container Platform should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics. The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics. The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers , also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Container Platform can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed. Note You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0 , the custom metrics autoscaler scales the workload down from 1 to 0 replicas or up from 0 replicas to 1. This is known as the activation phase . After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase . Some triggers allow you to change the number of replicas that are scaled by the cluster metrics autoscaler. In all cases, the parameter that configures the activation phase uses the same name as the corresponding scaling parameter, prefixed with activation . For example, if the threshold parameter configures scaling, activationThreshold would configure activation. Configuring the activation and scaling phases allows you more flexibility with your scaling policies. For example, you can configure a higher activation phase to prevent scaling up or down if the metric is particularly low. The activation value takes priority over the scaling value if the two lead to different decisions. For example, if the threshold is set to 10 and the activationThreshold is set to 50 , and the metric reports 40 , the scaler is not active and the pods are scaled to zero, even if the HPA requires 4 instances. Figure 3.1. Custom metrics autoscaler workflow You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object.
The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a GRPC connection to the controller to request the most recent trigger data received from the scaler. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas. As it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action. 3.2.1. Custom CA certificates for the Custom Metrics Autoscaler By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services. If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler . The Operator loads those certificates at startup and registers them as trusted. The config maps can contain one or more certificate files that contain one or more PEM-encoded CA certificates. Or, you can use separate config maps for each certificate file. Note If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect. 3.3. Installing the custom metrics autoscaler You can use the OpenShift Container Platform web console to install the Custom Metrics Autoscaler Operator. The installation creates the following five CRDs: ClusterTriggerAuthentication KedaController ScaledJob ScaledObject TriggerAuthentication 3.3.1. Installing the custom metrics autoscaler You can use the following procedure to install the Custom Metrics Autoscaler Operator. Prerequisites Remove any previously-installed Technology Preview versions of the Custom Metrics Autoscaler Operator. Remove any versions of the community-based KEDA.
Also, remove the KEDA 1.x custom resource definitions by running the following commands: USD oc delete crd scaledobjects.keda.k8s.io USD oc delete crd triggerauthentications.keda.k8s.io Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example: USD oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Custom Metrics Autoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode . This installs the Operator in all namespaces. Ensure that the openshift-keda namespace is selected for Installed Namespace . OpenShift Container Platform creates the namespace, if not present in your cluster. Click Install . Verify the installation by listing the Custom Metrics Autoscaler Operator components: Navigate to Workloads Pods . Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running. Navigate to Workloads Deployments to verify that the custom-metrics-autoscaler-operator deployment is running. Optional: Verify the installation in the OpenShift CLI using the following commands: USD oc get all -n openshift-keda The output appears similar to the following: Example output NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m Install the KedaController custom resource, which creates the required CRDs: In the OpenShift Container Platform web console, click Operators Installed Operators . Click Custom Metrics Autoscaler . On the Operator Details page, click the KedaController tab. On the KedaController tab, click Create KedaController and edit the file. kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: ["RequestReceived"] omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" serviceAccount: {} 1 Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Leave it blank or leave it empty to scale applications in all namespaces. This field should have a namespace or be empty. The default value is empty. 2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug , info , error . The default is info . 3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json . The default is console . 4 Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. 
5 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug . The default is 0 . 6 Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. Click Create to create the KEDA controller. 3.4. Understanding custom metrics autoscaler triggers Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods. The custom metrics autoscaler currently supports the Prometheus, CPU, memory, Apache Kafka, and cron triggers. You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow. You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster . 3.4.1. Understanding the Prometheus trigger You can scale pods based on Prometheus metrics, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring" for information on the configurations required to use the OpenShift Container Platform monitoring as a source for metrics. Note If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on. Example scaled object with a Prometheus target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: # ... triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job="test-app"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: "false" 9 unsafeSsl: "false" 10 1 Specifies Prometheus as the trigger type. 2 Specifies the address of the Prometheus server. This example uses OpenShift Container Platform monitoring. 3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using OpenShift Container Platform monitoring as a source for the metrics. 4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique. 5 Specifies the value that triggers scaling. Must be specified as a quoted string value. 6 Specifies the Prometheus query to use. 7 Specifies the authentication method to use. Prometheus scalers support bearer authentication ( bearer ), basic authentication ( basic ), or TLS authentication ( tls ). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret. 8 Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return. 9 Optional: Specifies how the trigger should proceed if the Prometheus target is lost. If true , the trigger continues to operate if the Prometheus target is lost. This is the default behavior. If false , the trigger returns an error if the Prometheus target is lost. 
10 Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint. If false , the certificate check is performed. This is the default behavior. If true , the certificate check is not performed. Important Skipping the check is not recommended. 3.4.1.1. Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring You can use the installed OpenShift Container Platform Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform. For your scaled objects to be able to read the OpenShift Container Platform Prometheus metrics, you must use a trigger authentication or a cluster trigger authentication in order to provide the authentication information required. The following procedure differs depending on which trigger authentication method you use. For more information on trigger authentications, see "Understanding custom metrics autoscaler trigger authentications". Note These steps are not required for an external Prometheus source. You must perform the following tasks, as described in this section: Create a service account. Create a secret that generates a token for the service account. Create the trigger authentication. Create a role. Add that role to the service account. Reference the token in the trigger authentication object used by Prometheus. Prerequisites OpenShift Container Platform monitoring must be installed. Monitoring of user-defined workloads must be enabled in OpenShift Container Platform monitoring, as described in the Creating a user-defined workload monitoring config map section. The Custom Metrics Autoscaler Operator must be installed. Procedure Change to the appropriate project: USD oc project <project_name> 1 1 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. Create a service account and token, if your cluster does not have one: Create a service account object by using the following command: USD oc create serviceaccount thanos 1 1 Specifies the name of the service account. Create a secret YAML to generate a service account token: apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token 1 Specifies the name of the service account. Create the secret object by using the following command: USD oc create -f <file_name>.yaml Use the following command to locate the token assigned to the service account: USD oc describe serviceaccount thanos 1 1 Specifies the name of the service account. Example output Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none> 1 Use this token in the trigger authentication. 
Create a trigger authentication with the service account token: Create a YAML file similar to the following: apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt 1 Specifies one of the following trigger authentication methods: If you are using a trigger authentication, specify TriggerAuthentication . This example configures a trigger authentication. If you are using a cluster trigger authentication, specify ClusterTriggerAuthentication . 2 Specifies that this object uses a secret for authorization. 3 Specifies the authentication parameter to supply by using the token. 4 Specifies the name of the token to use. 5 Specifies the key in the token to use with the specified parameter. Create the CR object: USD oc create -f <file-name>.yaml Create a role for reading Thanos metrics: Create a YAML file with the following parameters: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch Create the CR object: USD oc create -f <file-name>.yaml Create a role binding for reading Thanos metrics: Create a YAML file similar to the following: apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5 1 Specifies one of the following object types: If you are using a trigger authentication, specify RoleBinding . If you are using a cluster trigger authentication, specify ClusterRoleBinding . 2 Specifies the name of the role you created. 3 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. 4 Specifies the name of the service account to bind to the role. 5 Specifies the project where you previously created the service account. Create the CR object: USD oc create -f <file-name>.yaml You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use OpenShift Container Platform monitoring as the source, in the trigger, or scaler, you must include the following parameters: triggers.type must be prometheus triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 triggers.metadata.authModes must be bearer triggers.metadata.namespace must be set to the namespace of the object to scale triggers.authenticationRef must point to the trigger authentication resource specified in the step Additional resources Understanding custom metrics autoscaler trigger authentications 3.4.2. Understanding the CPU trigger You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. 
The CPU trigger considers the CPU utilization of the entire pod. If the pod has multiple containers, the CPU trigger considers the total CPU utilization of all containers in the pod. Note This trigger cannot be used with the ScaledJob custom resource. When using a CPU trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a CPU target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: # ... triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4 1 Specifies CPU as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics. 3.4.3. Understanding the memory trigger You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory utilization is the sum of all of the containers. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a memory target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: # ... triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4 1 Specifies memory as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled. 3.4.4. Understanding the Kafka trigger You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job. Note If the number of consumer groups exceeds the number of partitions in a topic, the extra consumer groups remain idle.
To avoid this, by default the number of replicas does not exceed: The number of partitions on a topic, if a topic is specified The number of partitions of all topics in the consumer group, if no topic is specified The maxReplicaCount specified in the scaled object or scaled job CR You can use the allowIdleConsumers parameter to disable these default behaviors. Example scaled object with a Kafka target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: # ... triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13 1 Specifies Kafka as the trigger type. 2 Specifies the name of the Kafka topic on which Kafka is processing the offset lag. 3 Specifies a comma-separated list of Kafka brokers to connect to. 4 Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag. 5 Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5 . 6 Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. 7 Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest . The default is latest . 8 Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic. If true , the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers. If false , the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default. 9 Specifies how the trigger behaves when a Kafka partition does not have a valid offset. If true , the consumers are scaled to zero for that partition. If false , the scaler keeps a single consumer for that partition. This is the default. 10 Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the polling cycle. If true , the scaler excludes partition lag in these partitions. If false , the trigger includes all consumer lag in all partitions. This is the default. 11 Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0 . 12 Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions. 13 Optional: Specifies whether to use TLS client authentication for Kafka. The default is disable . For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications". 3.4.5. Understanding the Cron trigger You can scale pods based on a time range. When the time range starts, the custom metrics autoscaler scales the pods associated with an object from the configured minimum number of pods to the specified number of desired pods. At the end of the time range, the pods are scaled back to the configured minimum. The time period must be configured in cron format .
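For reference, the start and end values in the Cron trigger use standard five-field cron syntax; the lines below are only a sketch of the field layout, so consult the linked cron format reference for the authoritative definition:

# field order: minute hour day-of-month month day-of-week
"0 6 * * *"    # 6:00 AM every day (the start value in the example that follows)
"30 18 * * *"  # 6:30 PM every day (the end value in the example that follows)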
The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time. Example scaled object with a Cron trigger apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: "0 6 * * *" 5 end: "30 18 * * *" 6 desiredReplicas: "100" 7 1 Specifies the minimum number of pods to scale down to at the end of the time frame. 2 Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas . The default is 100 . 3 Specifies a Cron trigger. 4 Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database . 5 Specifies the start of the time frame. 6 Specifies the end of the time frame. 7 Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount . 3.5. Understanding custom metrics autoscaler trigger authentications A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Container Platform secrets, platform-native pod authentication mechanisms, environment variables, and so on. You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace. Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces. Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object. Example secret for Basic authentication apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: "dXNlcm5hbWU=" 1 password: "cGFzc3dvcmQ=" 1 User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded. Example trigger authentication using a secret for Basic authentication kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example cluster trigger authentication with a secret for Basic authentication kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Note that no namespace is used with a cluster trigger authentication. 
2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example secret with certificate authority (CA) details apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t... 1 Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded. 2 Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded. Example trigger authentication using a secret for CA details kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. 6 Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint. 7 Specifies the name of the secret to use. 8 Specifies the key in the secret to use with the specified parameter. Example secret with a bearer token apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1 1 Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded. Example trigger authentication with a bearer token kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the token to use with the specified parameter. Example trigger authentication with an environment variable kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint. 3 Specify the parameter to set with this variable. 4 Specify the name of the environment variable. 5 Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object. 
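For context, the following is a minimal sketch of the workload side of the environment variable example above. The Deployment, image, and secret names are hypothetical; the only requirements are that the container named in containerName defines the ACCESS_KEY variable and that the Deployment is the resource referenced by scaleTargetRef in the scaled object:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment      # hypothetical workload referenced by scaleTargetRef
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: my-container      # matches containerName in the trigger authentication
        image: registry.example.com/example-app:latest   # hypothetical image
        env:
        - name: ACCESS_KEY      # environment variable read by the trigger authentication
          valueFrom:
            secretKeyRef:
              name: my-secret   # hypothetical secret that holds the credential
              key: access_key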
Example trigger authentication with pod authentication providers kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint. 3 Specifies a pod identity. Supported values are none , azure , gcp , aws-eks , or aws-kiam . The default is none . Additional resources For information about OpenShift Container Platform secrets, see Providing sensitive data to pods . 3.5.1. Using trigger authentications You use trigger authentications and cluster trigger authentications by using a custom resource to create the authentication, then add a reference to a scaled object or scaled job. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you are using a secret, the Secret object must exist, for example: Example secret apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD> Procedure Create the TriggerAuthentication or ClusterTriggerAuthentication object. Create a YAML file that defines the object: Example trigger authentication with a secret kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD Create the TriggerAuthentication object: USD oc create -f <filename>.yaml Create or edit a ScaledObject YAML file that uses the trigger authentication: Create a YAML file that defines the object by running the following command: Example scaled object with a trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify TriggerAuthentication . TriggerAuthentication is the default. Example scaled object with a cluster trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify ClusterTriggerAuthentication . Create the scaled object by running the following command: USD oc apply -f <filename> 3.6. Pausing the custom metrics autoscaler for a scaled object You can pause and restart the autoscaling of a workload, as needed. 
For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. 3.6.1. Pausing a custom metrics autoscaler You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Add the autoscaling.keda.sh/paused-replicas annotation with any value: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling. 3.6.2. Restarting the custom metrics autoscaler for a scaled object You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject . apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Remove the autoscaling.keda.sh/paused-replicas annotation. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Remove this annotation to restart a paused custom metrics autoscaler. 3.7. Gathering audit logs You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which is the troublesome application. 3.7.1. Configuring audit logging You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Edit the KedaController custom resource to add the auditConfig stanza: kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: # ... metricsServer: # ... auditConfig: logFormat: "json" 1 logOutputVolumeClaim: "pvc-audit-log" 2 policy: rules: 3 - level: Metadata omitStages: "RequestReceived" 4 omitManagedFields: false 5 lifetime: 6 maxAge: "2" maxBackup: "1" maxSize: "50" 1 Specifies the output format of the audit log, either legacy or json . 2 Specifies an existing persistent volume claim for storing the log data. 
All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout. 3 Specifies which events should be recorded and what data they should include: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. 4 Specifies stages for which no event is created. 5 Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields. 6 Specifies the size and lifespan of the audit logs. maxAge : The maximum number of days to retain audit log files, based on the timestamp encoded in their filename. maxBackup : The maximum number of audit log files to retain. Set to 0 to retain all audit log files. maxSize : The maximum size in megabytes of an audit log file before it gets rotated. Verification View the audit log file directly: Obtain the name of the keda-metrics-apiserver-* pod: oc get pod -n openshift-keda Example output NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s View the log data by using a command similar to the following: USD oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: USD oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata Example output ... {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.28","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} ... Alternatively, you can view a specific log: Use a command similar to the following to log into the keda-metrics-apiserver-* pod: USD oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda For example: USD oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda Change to the /var/audit-policy/ directory: sh-4.4USD cd /var/audit-policy/ List the available logs: sh-4.4USD ls Example output log-2023.02.17-14:50 policy.yaml View the log, as needed: sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request Example output 3.8. Gathering debugging data When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. To help troubleshoot your issue, provide the following information: Data gathered using the must-gather tool. The unique cluster ID. 
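For example, you can typically look up the cluster ID by querying the ClusterVersion object; this command is a suggested sketch rather than part of the official procedure, so verify the object and field names on your cluster: USD oc get clusterversion version -o jsonpath='{.spec.clusterID}'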
You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items: The openshift-keda namespace and its child objects. The Custom Metric Autoscaler Operator installation objects. The Custom Metric Autoscaler Operator CRD objects. 3.8.1. Gathering debugging data The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Note The standard OpenShift Container Platform must-gather command, oc adm must-gather , does not collect Custom Metrics Autoscaler Operator data. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream by running the following command. USD oc import-image is/must-gather -n openshift Perform one of the following: To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metric Autoscaler Operator is available. To gather the default must-gather data in addition to the Custom Metric Autoscaler Operator information: Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable: USD IMAGE="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Use the oc adm must-gather with the Custom Metrics Autoscaler Operator image: USD oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE} Example 3.1. 
Example must-gather output for the Custom Metric Autoscaler └── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── .insecure.log │ │ └── .log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.9. Viewing Operator metrics The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts. 3.9.1. Accessing performance metrics You can access the metrics and run queries by using the OpenShift Container Platform web console. Procedure Select the Administrator perspective in the OpenShift Container Platform web console. Select Observe Metrics . To create a custom query, add your PromQL query to the Expression field. To add multiple queries, select Add Query . 3.9.1.1. Provided Operator metrics The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console. Table 3.1. Custom Metric Autoscaler Operator metrics Metric name Description keda_scaler_activity Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive. 
keda_scaler_metrics_value The current value for each scaler's metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average. keda_scaler_metrics_latency The latency of retrieving the current metric from each scaler. keda_scaler_errors The number of errors that have occurred for each scaler. keda_scaler_errors_total The total number of errors encountered for all scalers. keda_scaled_object_errors The number of errors that have occurred for each scaled object. keda_resource_totals The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type. keda_trigger_totals The total number of triggers by trigger type. Custom Metrics Autoscaler Admission webhook metrics The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics. Metric name Description keda_scaled_object_validation_total The number of scaled object validations. keda_scaled_object_validation_errors The number of validation errors. 3.10. Understanding how to add custom metrics autoscalers To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job. You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload. 3.10.1. Adding a custom metrics autoscaler to a workload You can create a custom metrics autoscaler for a workload that is created by a Deployment , StatefulSet , or custom resource object. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you use a custom metrics autoscaler for scaling based on CPU or memory: Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage. USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> The pods associated with the object you want to scale must include specified memory and CPU limits. For example: Example pod spec apiVersion: v1 kind: Pod # ... spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: "128Mi" cpu: "500m" # ... Procedure Create a YAML file similar to the following.
Only the name <2> , object name <4> , and object kind <5> are required: Example scaled object apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "0" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: "RequestReceived" omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication 1 Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a workload" section. 2 Specifies a name for this custom metrics autoscaler. 3 Optional: Specifies the API version of the target resource. The default is apps/v1 . 4 Specifies the name of the object that you want to scale. 5 Specifies the kind as Deployment , StatefulSet or CustomResource . 6 Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 7 Optional. Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0 . The default is 300 . 8 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 9 Optional: Specifies the minimum number of replicas when scaling down. 10 Optional: Specifies the parameters for audit logs. as described in the "Configuring audit logging" section. 11 Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation . 12 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 13 Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false , which keeps the replica count as it is when the scaled object is deleted. 14 Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name} . 15 Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section. 16 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses OpenShift Container Platform monitoring. 
17 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledobject <scaled_object_name> Example output NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. FALLBACK : Indicates whether the custom metrics autoscaler is able to get metrics from the source: If False , the custom metrics autoscaler is getting metrics. If True , the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created. 3.10.2. Adding a custom metrics autoscaler to a job You can create a custom metrics autoscaler for any Job object. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The Custom Metrics Autoscaler Operator must be installed.
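As an optional sanity check (not part of the documented procedure), you can confirm that the Operator components are running before you continue, for example: USD oc get pods -n openshift-keda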
Procedure Create a YAML file similar to the following: kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: "custom" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: "0.5" pendingPodConditions: - "Ready" - "PodScheduled" - "AnyOtherCustomPodCondition" multipleScalersCalculation : "max" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "bearer" authenticationRef: 14 name: prom-cluster-triggerauthentication 1 Specifies the maximum duration the job can run. 2 Specifies the number of retries for a job. The default is 6 . 3 Optional: Specifies how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, the default is 1 . 4 Optional: Specifies how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, the default is 1 . For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset the default is the value of the parallelism parameter. 5 Specifies the template for the pod the controller creates. 6 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 7 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 8 Optional: Specifies the number of successful finished jobs should be kept. The default is 100 . 9 Optional: Specifies how many failed jobs should be kept. The default is 100 . 10 Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 11 Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated: default : The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs. gradual : The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs. 12 Optional: Specifies a scaling strategy: default , custom , or accurate . The default is default . For more information, see the link in the "Additional resources" section that follows. 13 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. 14 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. 
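Optionally, before creating the object, you can ask the API server to validate the file without persisting it; this relies on standard oc create behavior and is offered here only as a sketch: USD oc create -f <filename>.yaml --dry-run=server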
Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledjob <scaled_job_name> Example output NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. 3.10.3. Additional resources Understanding custom metrics autoscaler trigger authentications 3.11. Removing the Custom Metrics Autoscaler Operator You can remove the custom metrics autoscaler from your OpenShift Container Platform cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues. Note Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, OpenShift Container Platform can hang when you delete the openshift-keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR. 3.11.1. Uninstalling the Custom Metrics Autoscaler Operator Use the following procedure to remove the custom metrics autoscaler from your OpenShift Container Platform cluster. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-keda project. Remove the KedaController custom resource. Find the CustomMetricsAutoscaler Operator and click the KedaController tab. Find the custom resource, and then click Delete KedaController . Click Uninstall . Remove the Custom Metrics Autoscaler Operator: Click Operators Installed Operators . Find the CustomMetricsAutoscaler Operator and click the Options menu and select Uninstall Operator . Click Uninstall . Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components: Delete the custom metrics autoscaler CRDs: clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh USD oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted. List any custom metrics autoscaler cluster roles: USD oc get clusterrole | grep keda.sh Delete the listed custom metrics autoscaler cluster roles. For example: USD oc delete clusterrole.keda.sh-v1alpha1-admin List any custom metrics autoscaler cluster role bindings: USD oc get clusterrolebinding | grep keda.sh Delete the listed custom metrics autoscaler cluster role bindings. 
For example: USD oc delete clusterrolebinding.keda.sh-v1alpha1-admin Delete the custom metrics autoscaler project: USD oc delete project openshift-keda Delete the Custom Metrics Autoscaler Operator: USD oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda | [
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n openshift-keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10",
"oc project <project_name> 1",
"oc create serviceaccount thanos 1",
"apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token",
"oc create -f <file_name>.yaml",
"oc describe serviceaccount thanos 1",
"Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n openshift-keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication",
"oc create -f <filename>.yaml",
"oc get scaledjob <scaled_job_name>",
"NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project openshift-keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator |
7.68. gnome-desktop | 7.68. gnome-desktop 7.68.1. RHBA-2012:1352 - gnome-desktop bug fix update Updated gnome-desktop packages that fix a bug are now available. The gnome-desktop package contains an internal library (libgnome-desktop) used to implement some portions of the GNOME desktop, and also some data files and other shared components of the GNOME user environment. Bug Fix BZ# 829891 Previously, when a user hit the system's hot-key (most commonly Fn+F7) to change display configurations, the system could potentially switch to an invalid mode, which would fail to display. With this update, gnome-desktop now selects valid XRandR modes and correctly switching displays with the hot-key works as expected. All users of gnome-desktop are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gnome-desktop |
1.10.4.3. EDIT MONITORING SCRIPTS Subsection | 1.10.4.3. EDIT MONITORING SCRIPTS Subsection Click on the MONITORING SCRIPTS link at the top of the page. The EDIT MONITORING SCRIPTS subsection allows the administrator to specify a send/expect string sequence to verify that the service for the virtual server is functional on each real server. It is also the place where the administrator can specify customized scripts to check services requiring dynamically changing data. Figure 1.38. The EDIT MONITORING SCRIPTS Subsection Sending Program For more advanced service verification, you can use this field to specify the path to a service-checking script. This function is especially helpful for services that require dynamically changing data, such as HTTPS or SSL. To use this function, you must write a script that returns a textual response, set it to be executable, and type the path to it in the Sending Program field. Note If an external program is entered in the Sending Program field, then the Send field is ignored. Send A string for the nanny daemon to send to each real server in this field. By default the send field is completed for HTTP. You can alter this value depending on your needs. If you leave this field blank, the nanny daemon attempts to open the port and assume the service is running if it succeeds. Only one send sequence is allowed in this field, and it can only contain printable, ASCII characters as well as the following escape characters: \n for new line. \r for carriage return. \t for tab. \ to escape the character which follows it. Expect The textual response the server should return if it is functioning properly. If you wrote your own sending program, enter the response you told it to send if it was successful. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s3-piranha-virtservs-ems-CSO |
Chapter 3. Installing the Red Hat JBoss Web Server 6.0 | You can install the JBoss Web Server 6.0 on Red Hat Enterprise Linux or Microsoft Windows. For more information, see the following sections of the installation guide: Installing JBoss Web Server on Red Hat Enterprise Linux from archive files Installing JBoss Web Server on Red Hat Enterprise Linux from RPM packages Installing JBoss Web Server on Microsoft Windows | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_5_release_notes/installing_the_red_hat_jboss_web_server_6_0 |
function::set_kernel_long | function::set_kernel_long Name function::set_kernel_long - Writes a long value to kernel memory Synopsis Arguments addr The kernel address to write the long to val The long which is to be written Description Writes the long value to a given kernel memory address. Reports an error when writing to the given address fails. Requires the use of guru mode (-g). | [
"set_kernel_long(addr:long,val:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-set-kernel-long |
13.8. Put Request Example | 13.8. Put Request Example The following is the coded request from a sample put request using Hot Rod: Table 13.43. Put Request Example Byte 0 1 2 3 4 5 6 7 8 0xA0 0x09 0x41 0x01 0x07 0x4D ('M') 0x79 ('y') 0x43 ('C') 16 0x61 ('a') 0x63 ('c') 0x68 ('h') 0x65 ('e') 0x00 0x03 0x00 0x00 24 0x00 0x05 0x48 ('H') 0x65 ('e') 0x6C ('l') 0x6C ('l') 0x6F ('o') 0x00 32 0x00 0x05 0x57 ('W') 0x6F ('o') 0x72 ('r') 0x6C ('l') 0x64 ('d') - The following table contains all header fields and their values for the example request: Table 13.44. Example Request Field Names and Values Field Name Byte Value Magic 0 0xA0 Version 2 0x41 Cache Name Length 4 0x07 Flag 12 0x00 Topology ID 14 0x00 Transaction ID 16 0x00 Key 18-22 'Hello' Max Idle 24 0x00 Value 26-30 'World' Message ID 1 0x09 Opcode 3 0x01 Cache Name 5-11 'MyCache' Client Intelligence 13 0x03 Transaction Type 15 0x00 Key Field Length 17 0x05 Lifespan 23 0x00 Value Field Length 25 0x05 The following is a coded response for the sample put request: Table 13.45. Coded Response for the Sample Put Request Byte 0 1 2 3 4 5 6 7 8 0xA1 0x09 0x01 0x00 0x00 - - - The following table contains all header fields and their values for the example response: Table 13.46. Example Response Field Names and Values Field Name Byte Value Magic 0 0xA1 Opcode 2 0x01 Topology Change Marker 4 0x00 Message ID 1 0x09 Status 3 0x00 23149%2C+Administration+and+Configuration+Guide-6.628-06-2017+13%3A51%3A02JBoss+Data+Grid+6Documentation6.6.1 Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/put_request_example1 |
2.2. PowerTOP | 2.2. PowerTOP The introduction of the tickless kernel in Red Hat Enterprise Linux 7 allows the CPU to enter the idle state more frequently, reducing power consumption and improving power management. The PowerTOP tool identifies specific components of kernel and user-space applications that frequently wake up the CPU. PowerTOP was used in development to perform the audits that led to many applications being tuned in this release, reducing unnecessary CPU wake up by a factor of ten. Red Hat Enterprise Linux 7 comes with version 2.x of PowerTOP . This version is a complete rewrite of the 1.x code base. It features a clearer tab-based user interface and extensively uses the kernel "perf" infrastructure to give more accurate data. The power behavior of system devices is tracked and prominently displayed, so problems can be pinpointed quickly. More experimentally, the 2.x codebase includes a power estimation engine that can indicate how much power individual devices and processes are consuming. See Figure 2.1, "PowerTOP in Operation" . To install PowerTOP run, as root , the following command: To run PowerTOP , use, as root , the following command: PowerTOP can provide an estimate of the total power usage of the system and show individual power usage for each process, device, kernel work, timer, and interrupt handler. Laptops should run on battery power during this task. To calibrate the power estimation engine, run, as root , the following command: Calibration takes time. The process performs various tests, and will cycle through brightness levels and switch devices on and off. Let the process finish and do not interact with the machine during the calibration. When the calibration process is completed, PowerTOP starts as normal. Let it run for approximately an hour to collect data. When enough data is collected, power estimation figures will be displayed in the first column. If you are executing the command on a laptop, it should still be running on battery power so that all available data is presented. While it runs, PowerTOP gathers statistics from the system. In the Overview tab, you can view a list of the components that are either sending wake-ups to the CPU most frequently or are consuming the most power (see Figure 2.1, "PowerTOP in Operation" ). The adjacent columns display power estimation, how the resource is being used, wakeups per second, the classification of the component, such as process, device, or timer, and a description of the component. Wakeups per second indicates how efficiently the services or the devices and drivers of the kernel are performing. Less wakeups means less power is consumed. Components are ordered by how much further their power usage can be optimized. Tuning driver components typically requires kernel changes, which is beyond the scope of this document. However, userland processes that send wakeups are more easily managed. First, determine whether this service or application needs to run at all on this system. If not, simply deactivate it. To turn off an old System V service permanently, run: For more details about the process, run, as root , the following commands: If the trace looks like it is repeating itself, then it probably is a busy loop. Fixing such bugs typically requires a code change in that component. As seen in Figure 2.1, "PowerTOP in Operation" , total power consumption and the remaining battery life are displayed, if applicable. 
Below these is a short summary featuring total wakeups per second, GPU operations per second, and virtual filesystem operations per second. In the rest of the screen, there is a list of processes, interrupts, devices and other resources sorted according to their utilization. If properly calibrated, a power consumption estimation for every listed item in the first column is shown as well. Use the Tab and Shift + Tab keys to cycle through tabs. In the Idle stats tab, use of C-states is shown for all processors and cores. In the Frequency stats tab, use of P-states including the Turbo mode (if applicable) is shown for all processors and cores. The longer the CPU stays in the higher C- or P-states, the better ( C4 being higher than C3 ). This is a good indication of how well the CPU usage has been optimized. Residency should ideally be 90% or more in the highest C- or P-state while the system is idle. The Device Stats tab provides similar information to the Overview tab but only for devices. The Tunables tab contains suggestions for optimizing the system for lower power consumption. Use the up and down keys to move through suggestions and the enter key to toggle the suggestion on and off. Figure 2.1. PowerTOP in Operation You can also generate HTML reports by running PowerTOP with the --html option. Replace the htmlfile.html parameter with the required name for the output file: By default, PowerTOP takes measurements in 20-second intervals; you can change this interval with the --time option: For more information about PowerTOP , see PowerTOP's home page . PowerTOP can also be used in conjunction with the turbostat utility. It is a reporting tool that displays information about processor topology, frequency, idle power-state statistics, temperature, and power usage on Intel 64 processors. For more information about the turbostat utility, see the turbostat (8) man page, or read the Performance Tuning Guide . | [
"~]# yum install powertop",
"~]# powertop",
"~]# powertop --calibrate",
"~]# systemctl disable servicename.service",
"~]# ps -awux | grep processname ~]# strace -p processid",
"~]# powertop --html= htmlfile.html",
"~]# powertop --html= htmlfile.html --time= seconds"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/powertop |
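When repeated measurements are needed, the --html and --time options described in the PowerTOP section above can be scripted. The following is only a sketch of that idea: it assumes it is run as root, uses only the flags documented above, and the report file names are arbitrary.

```python
import datetime
import subprocess

def collect_powertop_reports(runs: int = 3, seconds: int = 60) -> None:
    """Generate a series of timestamped PowerTOP HTML reports."""
    for _ in range(runs):
        stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
        # Equivalent to: powertop --html=powertop-<stamp>.html --time=<seconds>
        subprocess.run(
            ["powertop", f"--html=powertop-{stamp}.html", f"--time={seconds}"],
            check=True,
        )

collect_powertop_reports()
```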
Provisioning hosts | Provisioning hosts Red Hat Satellite 6.15 Configure provisioning resources and networking, provision physical machines, provision virtual machines on cloud providers or virtualization infrastructure, create hosts manually or by using the Discovery service Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/index |
4.8. Configuring Red Hat Satellite Errata Management for a Virtual Machine | 4.8. Configuring Red Hat Satellite Errata Management for a Virtual Machine In the Administration Portal, you can configure a virtual machine to display the available errata. The virtual machine needs to be associated with a Red Hat Satellite server to show available errata. Red Hat Virtualization 4.3 supports errata management with Red Hat Satellite 6.5. Prerequisites The host on which the virtual machine runs needs to be configured to receive errata information from Satellite. For more information, see Configuring Satellite Errata Management for a Host in the Administration Guide . The virtual machine must have the ovirt-guest-agent package installed. This package allows the virtual machine to report its host name to the Red Hat Virtualization Manager. This allows the Red Hat Satellite server to identify the virtual machine as a content host and report the applicable errata. For more information on installing the ovirt-guest-agent package see the following topics: Section 2.4.2, "Installing the Guest Agents and Drivers on Red Hat Enterprise Linux" for Red Hat Enterprise Linux virtual machines Section 3.3.2, "Installing the Guest Agents, Tools, and Drivers on Windows" for Windows virtual machines The virtual machine must be registered to the Satellite server as a content host and have the katello-agent package installed. For information on how to configure a host registration and how to register a host and install the katello-agent package see Registering Hosts in the Red Hat Satellite document Managing Hosts . Important Virtual machines are identified in the Satellite server by their FQDN. This ensures that an external content host ID does not need to be maintained in Red Hat Virtualization. Procedure To configure Red Hat Satellite errata management: Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Foreman/Satellite tab. Select the required Satellite server from the Provider drop-down list. Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/Configuring_Satellite_Errata |
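Before working through the Administration Portal steps above, it can be useful to confirm the prerequisites from inside the guest. The sketch below is illustrative only and relies on assumptions: it queries the package names mentioned in the prerequisites (ovirt-guest-agent and katello-agent, which may differ on your guest image) and prints the FQDN that Satellite uses to identify the virtual machine as a content host.

```python
import subprocess

def check_errata_prerequisites() -> None:
    """Report whether this guest looks ready for Satellite errata management."""
    for package in ("ovirt-guest-agent", "katello-agent"):
        installed = subprocess.run(["rpm", "-q", package],
                                   capture_output=True).returncode == 0
        print(f"{package}: {'installed' if installed else 'MISSING'}")
    fqdn = subprocess.run(["hostname", "-f"],
                          capture_output=True, text=True).stdout.strip()
    print(f"FQDN reported to Satellite: {fqdn}")

check_errata_prerequisites()
```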
Chapter 92. TgzArtifact schema reference | Chapter 92. TgzArtifact schema reference Used in: Plugin Property Property type Description url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. type string Must be tgz . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-TgzArtifact-reference |
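The sha512sum value is the hex digest of the artifact you verified manually. A minimal sketch for producing it, equivalent to running the sha512sum utility on the file (the file name below is a placeholder):

```python
import hashlib

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA512 digest to paste into the artifact's sha512sum property."""
    digest = hashlib.sha512()
    with open(path, "rb") as artifact:
        while block := artifact.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

print(sha512_of("my-connector-plugin.tgz"))  # placeholder file name
```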
Chapter 12. Resource Management | Chapter 12. Resource Management Control Groups Red Hat Enterprise Linux 7 features control groups, cgroups, which is a concept for organizing processes in a tree of named groups for the purpose of resource management. They provide a way to hierarchically group and label processes and a way to apply resource limits to these groups. In Red Hat Enterprise Linux 7, control groups are exclusively managed through systemd . Control groups are configured in systemd unit files and are managed with systemd's command line interface (CLI) tools. Control groups and other resource management features are discussed in detail in the Resource Management and Linux Containers Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-resource_management |
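As a small illustration of managing control groups through systemd rather than by editing cgroup hierarchies directly, the sketch below applies resource limits to an existing unit with systemctl set-property. The unit name and limit values are placeholders; CPUQuota and MemoryLimit are standard systemd resource-control properties, but consult systemd.resource-control(5) for the options available on your system.

```python
import subprocess

def limit_unit(unit: str, cpu_quota: str = "20%", memory_limit: str = "512M") -> None:
    """Apply cgroup-backed resource limits to a systemd unit (run as root)."""
    # Equivalent to: systemctl set-property <unit> CPUQuota=20% MemoryLimit=512M
    subprocess.run(
        ["systemctl", "set-property", unit,
         f"CPUQuota={cpu_quota}", f"MemoryLimit={memory_limit}"],
        check=True,
    )

limit_unit("httpd.service")  # placeholder unit name
```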
Chapter 17. Maintaining IdM Kerberos keytab files | Chapter 17. Maintaining IdM Kerberos keytab files Learn more about what Kerberos keytab files are and how Identity Management (IdM) uses them to allow services to authenticate securely with Kerberos. You can use this information to understand why you should protect these sensitive files, and to troubleshoot communication issues between IdM services. For more information, see the following topics: How Identity Management uses Kerberos keytab files Verifying that Kerberos keytab files are in sync with the IdM database List of IdM Kerberos keytab files and their contents Viewing the encryption type of your IdM master key . 17.1. How Identity Management uses Kerberos keytab files A Kerberos keytab is a file containing Kerberos principals and their corresponding encryption keys. Hosts, services, users, and scripts can use keytabs to authenticate to the Kerberos Key Distribution Center (KDC) securely, without requiring human interaction. Every IdM service on an IdM server has a unique Kerberos principal stored in the Kerberos database. For example, if IdM servers east.idm.example.com and west.idm.example.com provide DNS services, IdM creates 2 unique DNS Kerberos principals to identify these services, which follow the naming convention <service>/[email protected] : DNS/[email protected] DNS/[email protected] IdM creates a keytab on the server for each of these services to store a local copy of the Kerberos keys, along with their Key Version Numbers (KVNO). For example, the default keytab file /etc/krb5.keytab stores the host principal, which represents that machine in the Kerberos realm and is used for login authentication. The KDC generates encryption keys for the different encryption algorithms it supports, such as aes256-cts-hmac-sha1-96 and aes128-cts-hmac-sha1-96 . You can display the contents of a keytab file with the klist command: Additional resources Verifying that Kerberos keytab files are in sync with the IdM database List of IdM Kerberos keytab files and their contents 17.2. Verifying that Kerberos keytab files are in sync with the IdM database When you change a Kerberos password, IdM automatically generates a new corresponding Kerberos key and increments its Key Version Number (KVNO). If a Kerberos keytab is not updated with the new key and KVNO, any services that depend on that keytab to retrieve a valid key might not be able to authenticate to the Kerberos Key Distribution Center (KDC). If one of your IdM services cannot communicate with another service, use the following procedure to verify that your Kerberos keytab files are in sync with the keys stored in the IdM database. If they are out of sync, retrieve a Kerberos keytab with an updated key and KVNO. This example compares and retrieves an updated DNS principal for an IdM server. Prerequisites You must authenticate as the IdM admin account to retrieve keytab files You must authenticate as the root account to modify keytab files owned by other users Procedure Display the KVNO of the principals in the keytab you are verifying. In the following example, the /etc/named.keytab file has the key for the DNS/[email protected] principal with a KVNO of 2. Display the KVNO of the principal stored in the IdM database. In this example, the KVNO of the key in the IdM database does not match the KVNO in the keytab. Authenticate as the IdM admin account. Retrieve an updated Kerberos key for the principal and store it in its keytab. 
Perform this step as the root user so you can modify the /etc/named.keytab file, which is owned by the named user. Verification Display the updated KVNO of the principal in the keytab. Display the KVNO of the principal stored in the IdM database and ensure it matches the KVNO from the keytab. Additional resources How Identity Management uses Kerberos keytab files List of IdM Kerberos keytab files and their contents 17.3. List of IdM Kerberos keytab files and their contents The following table displays the location, contents, and purpose of the IdM Kerberos keytab files. Table 17.1. Table Keytab location Contents Purpose /etc/krb5.keytab host principal Verifying user credentials when logging in, used by NFS if there is no nfs principal /etc/dirsrv/ds.keytab ldap principal Authenticating users to the IdM database, securely replicating database contents between IdM replicas /var/lib/ipa/gssproxy/http.keytab HTTP principal Authenticating to the Apache server /etc/named.keytab DNS principal Securely updating DNS records /etc/ipa/dnssec/ipa-dnskeysyncd.keytab ipa-dnskeysyncd principal Keeping OpenDNSSEC synchronized with LDAP /etc/pki/pki-tomcat/dogtag.keytab dogtag principal Communicating with the Certificate Authority (CA) /etc/samba/samba.keytab cifs and host principals Communicating with the Samba service /var/lib/sss/keytabs/ ad-domain.com .keytab Active Directory (AD) domain controller (DCs) principals in the form [email protected] Communicating with AD DCs through an IdM-AD Trust Additional resources How Identity Management uses Kerberos keytab files Verifying that Kerberos keytab files are in sync with the IdM database 17.4. Viewing the encryption type of your IdM master key As an Identity Management (IdM) administrator, you can view the encryption type of your IdM master key, which is the key that the IdM Kerberos Distribution Center (KDC) uses to encrypt all other principals when storing them at rest. Knowing the encryption type helps you determine your deployment's compatibility with FIPS standards. As of RHEL 8.7, the encryption type is aes256-cts-hmac-sha384-192 . This encryption type is compatible with the default RHEL 9 FIPS cryptographic policy aiming to comply with FIPS 140-3. The encryption types used on RHEL versions are not compatible with RHEL 9 systems that adhere to FIPS 140-3 standards. To make RHEL 9 systems compatible with a RHEL 8 FIPS 140-2 deployment, see the AD Domain Users unable to login in to the FIPS-compliant environment KCS solution. Prerequisites You have root access to any of the RHEL 8 replicas in the IdM deployment. Procedure On the replica, view the encryption type on the command line: The aes256-cts-hmac-sha1-96 key in the output indicates that the IdM deployment was installed on a server that was running RHEL 8.6 or earlier. The presence of a aes256-cts-hmac-sha384-192 key in the output would indicate that the IdM deployment was installed on a server that was running RHEL 8.7 or later. | [
"klist -ekt /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 02/24/2022 20:28:09 host/[email protected] (aes256-cts-hmac-sha1-96) 2 02/24/2022 20:28:09 host/[email protected] (aes128-cts-hmac-sha1-96) 2 02/24/2022 20:28:09 host/[email protected] (camellia128-cts-cmac) 2 02/24/2022 20:28:09 host/[email protected] (camellia256-cts-cmac)",
"klist -ekt /etc/named.keytab Keytab name: FILE:/etc/named.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 11/26/2021 13:51:11 DNS/[email protected] (aes256-cts-hmac-sha1-96) 2 11/26/2021 13:51:11 DNS/[email protected] (aes128-cts-hmac-sha1-96) 2 11/26/2021 13:51:11 DNS/[email protected] (camellia128-cts-cmac) 2 11/26/2021 13:51:11 DNS/[email protected] (camellia256-cts-cmac)",
"kvno DNS/[email protected] DNS/[email protected]: kvno = 3",
"kinit admin Password for [email protected]:",
"ipa-getkeytab -s server1.idm.example.com -p DNS/server1.idm.example.com -k /etc/named.keytab",
"klist -ekt /etc/named.keytab Keytab name: FILE:/etc/named.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 4 08/17/2022 14:42:11 DNS/[email protected] (aes256-cts-hmac-sha1-96) 4 08/17/2022 14:42:11 DNS/[email protected] (aes128-cts-hmac-sha1-96) 4 08/17/2022 14:42:11 DNS/[email protected] (camellia128-cts-cmac) 4 08/17/2022 14:42:11 DNS/[email protected] (camellia256-cts-cmac)",
"kvno DNS/[email protected] DNS/[email protected]: kvno = 4",
"kadmin.local getprinc K/M | grep -E '^Key:' Key: vno 1, aes256-cts-hmac-sha1-96"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_maintaining-idm-kerberos-keytab-files_managing-users-groups-hosts |
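The keytab-versus-KDC comparison in the verification steps above lends itself to scripting. The following sketch makes the same assumptions as the example (the DNS principal, realm, and /etc/named.keytab path are placeholders modelled on this section, and the script must run as a user able to read the keytab and query the KDC): it parses klist -ekt and kvno output and flags a KVNO mismatch.

```python
import re
import subprocess

def keytab_kvnos(keytab: str, principal: str) -> set:
    """Collect the KVNOs stored in a keytab for one principal (via klist -ekt)."""
    out = subprocess.run(["klist", "-ekt", keytab],
                         capture_output=True, text=True, check=True).stdout
    kvnos = set()
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and principal in line:
            kvnos.add(int(fields[0]))
    return kvnos

def kdc_kvno(principal: str) -> int:
    """Ask the KDC for the current key version of a principal (via kvno)."""
    out = subprocess.run(["kvno", principal],
                         capture_output=True, text=True, check=True).stdout
    return int(re.search(r"kvno = (\d+)", out).group(1))

principal = "DNS/server1.idm.example.com@IDM.EXAMPLE.COM"  # placeholder principal
stored = keytab_kvnos("/etc/named.keytab", principal)
current = kdc_kvno(principal)
print("in sync" if current in stored
      else f"out of sync: keytab has {stored}, KDC reports {current}")
```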
Chapter 8. Upgrading AMQ Streams | Chapter 8. Upgrading AMQ Streams AMQ Streams on OpenShift can be upgraded to version 1.7 to take advantage of new features and enhancements, performance improvements, and security options. During this upgrade, you upgrade Kafka to the latest supported version. Each Kafka release introduces new features, improvements, and bug fixes to your AMQ Streams deployment. AMQ Streams can be downgraded to the previous version if you encounter issues with the newer version. Released versions of AMQ Streams are listed in the Product Downloads section of the Red Hat Customer Portal. Upgrade paths Two upgrade paths are possible: Incremental Upgrading AMQ Streams from the previous minor version to version 1.7. Multi-version Upgrading AMQ Streams from an old version to version 1.7 within a single upgrade (skipping one or more intermediate versions). For example, upgrading from AMQ Streams 1.5 directly to 1.7. Kafka version support The Kafka versions table lists the supported Kafka versions for AMQ Streams 1.7. In the table: The latest Kafka version is supported for production use. The previous Kafka version is supported only for the purpose of upgrading to AMQ Streams 1.7. Identify the Kafka version to upgrade to before you begin the upgrade procedures described in this chapter. Note You can upgrade to a higher Kafka version as long as it is supported by your version of AMQ Streams. In some cases, you can also downgrade to a supported Kafka version. Downtime and availability If topics are configured for high availability, upgrading AMQ Streams should not cause any downtime for consumers and producers that publish and read data from those topics. Highly available topics have a replication factor of at least 3 and partitions distributed evenly among the brokers. Upgrading AMQ Streams triggers rolling updates, where all brokers are restarted in turn, at different stages of the process. During rolling updates, not all brokers are online, so overall cluster availability is temporarily reduced. A reduction in cluster availability increases the chance that a broker failure will result in lost messages. 8.1. AMQ Streams and Kafka upgrades Upgrading AMQ Streams is a three-stage process. To upgrade brokers and clients without downtime, you must complete the upgrade procedures in the following order: Update your Cluster Operator to a new AMQ Streams version. The approach you take depends on how you deployed the Cluster Operator . If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator . If you deployed the Cluster Operator from the OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the AMQ Streams Operators to a new AMQ Streams version. Depending on your chosen upgrade strategy, after updating the channel, either: An automatic upgrade is initiated A manual upgrade will require approval before the installation begins For more information on using the OperatorHub to upgrade Operators, see Upgrading installed Operators in the OpenShift documentation. Upgrade all Kafka brokers and client applications to the latest supported Kafka version. Section 8.1.3, "Upgrading Kafka" Section 8.1.5, "Strategies for upgrading clients" If applicable, perform the following tasks: Update existing custom resources to handle deprecated custom resource properties.
Section 8.2, "AMQ Streams custom resource upgrades" Note Custom resources can also be updated before the Kafka upgrade. Update listeners to use the GenericKafkaListener schema Section 8.1.4, "Updating listeners to the generic listener configuration" Optional: incremental cooperative rebalance upgrade Consider upgrading consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances. Section 8.3, "Upgrading consumers to cooperative rebalancing" 8.1.1. Kafka versions Kafka's log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster. To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers). The following table shows the differences between Kafka versions: Kafka version Interbroker protocol version Log message format version ZooKeeper version 2.6.0 2.6 2.6 3.5.8 2.7.0 2.7 2.7 3.5.8 Inter-broker protocol version In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol . Each version of Kafka has a compatible version of the inter-broker protocol. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table. The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config . Log message format version When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the format they were encoded with. You can configure a Kafka broker to convert messages from newer format versions to a given older format version before the broker appends the message to the log. In Kafka, there are two different methods for setting the message format version: The message.format.version property is set on topics. The log.message.format.version property is set on Kafka brokers. The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration. The upgrade tasks in this section assume that the message format version is defined by the log.message.format.version . 8.1.2. Upgrading the Cluster Operator The steps to upgrade your Cluster Operator deployment to use AMQ Streams 1.7 are described in this section. Follow this procedure if you deployed the Cluster Operator using the installation YAML files rather than OperatorHub. The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation. Note Refer to the documentation supporting a specific version of AMQ Streams for information on how to upgrade to that version. 8.1.2.1. Upgrading the Cluster Operator This procedure describes how to upgrade a Cluster Operator deployment to use AMQ Streams 1.7. Prerequisites An existing Cluster Operator deployment is available. You have downloaded the release artifacts for AMQ Streams 1.7 . Procedure Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). 
Any changes will be overwritten by the new version of the Cluster Operator. Update your custom resources to reflect the supported configuration options available for AMQ Streams version 1.7. Update the Cluster Operator. Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in. On Linux, use: On MacOS, use: If you modified one or more environment variables in your existing Cluster Operator Deployment , edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables. When you have an updated configuration, deploy it along with the rest of the installation resources: oc replace -f install/cluster-operator Wait for the rolling updates to complete. If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns a "Version not found" error message. Otherwise, no error message is returned. For example: "Version 2.4.0 is not supported. Supported versions are: 2.6.0, 2.6.1, 2.7.0." If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version: Edit the Kafka custom resource. Change the spec.kafka.version property to a supported Kafka version. If the error message is not returned, go to the step. You will upgrade the Kafka version later. Get the image for the Kafka pod to ensure the upgrade was successful: oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The image tag shows the new Operator version. For example: registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:{ContainerVersion} Your Cluster Operator was upgraded to version 1.7 but the version of Kafka running in the cluster it manages is unchanged. Following the Cluster Operator upgrade, you must perform a Kafka upgrade . 8.1.3. Upgrading Kafka After you have upgraded your Cluster Operator to 1.7, the step is to upgrade all Kafka brokers to the latest supported version of Kafka. Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers. The Cluster Operator initiates rolling updates based on the Kafka cluster configuration. If Kafka.spec.kafka.config contains... The Cluster Operator initiates... Both the inter.broker.protocol.version and the log.message.format.version . A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by log.message.format.version . Changing each will trigger a further rolling update. Either the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. No configuration for the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper. A single rolling update occurs even if the ZooKeeper version is unchanged. Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version. Additional resources Section 8.1.2, "Upgrading the Cluster Operator" Section 8.1.1, "Kafka versions" 8.1.3.1. Kafka version and image mappings When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property. Each Kafka resource can be configured with a Kafka.spec.kafka.version . 
The Cluster Operator's STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource. If Kafka.spec.kafka.image is not configured, the default image for the given version is used. If Kafka.spec.kafka.image is configured, the default image is overridden. Warning The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version. 8.1.3.2. Upgrading Kafka brokers and client applications This procedure describes how to upgrade a AMQ Streams Kafka cluster to the latest supported Kafka version. Compared to your current Kafka version, the new version might support a higher log message format version or inter-broker protocol version , or both. Follow the steps to upgrade these versions, if required. For more information, see Section 8.1.1, "Kafka versions" . You should also choose a strategy for upgrading clients . Kafka clients are upgraded in step 6 of this procedure. Prerequisites For the Kafka resource to be upgraded, check that: The Cluster Operator, which supports both versions of Kafka, is up and running. The Kafka.spec.kafka.config does not contain options that are not supported in the new Kafka version. Procedure Update the Kafka cluster configuration: oc edit kafka my-cluster If configured, ensure that Kafka.spec.kafka.config has the log.message.format.version and inter.broker.protocol.version set to the defaults for the current Kafka version. For example, if upgrading from Kafka version 2.6.0 to 2.7.0: kind: Kafka spec: # ... kafka: version: 2.6.0 config: log.message.format.version: "2.6" inter.broker.protocol.version: "2.6" # ... If log.message.format.version and inter.broker.protocol.version are not configured, AMQ Streams automatically updates these versions to the current defaults after the update to the Kafka version in the step. Note The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version. Note Changing the kafka.version ensures that all brokers in the cluster will be upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged ensures that the brokers can continue to communicate with each other throughout the upgrade. For example, if upgrading from Kafka 2.6.0 to 2.7.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.7.0 1 config: log.message.format.version: "2.6" 2 inter.broker.protocol.version: "2.6" 3 # ... 1 Kafka version is changed to the new version. 2 Message format version is unchanged. 3 Inter-broker protocol version is unchanged. Warning You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets . The downgraded cluster will not understand the messages. 
If the image for the Kafka cluster is defined in the Kafka custom resource, in Kafka.spec.kafka.image , update the image to point to a container image with the new Kafka version. See Kafka version and image mappings Save and exit the editor, then wait for rolling updates to complete. Check the progress of the rolling updates by watching the pod state transitions: oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka. Depending on your chosen strategy for upgrading clients , upgrade all client applications to use the new version of the client binaries. If required, set the version property for Kafka Connect and MirrorMaker as the new version of Kafka: For Kafka Connect, update KafkaConnect.spec.version . For MirrorMaker, update KafkaMirrorMaker.spec.version . For MirrorMaker 2.0, update KafkaMirrorMaker2.spec.version . If configured, update the Kafka resource to use the new inter.broker.protocol.version version. Otherwise, go to step 9. For example, if upgrading to Kafka 2.7.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.7.0 config: log.message.format.version: "2.6" inter.broker.protocol.version: "2.7" # ... Wait for the Cluster Operator to update the cluster. If configured, update the Kafka resource to use the new log.message.format.version version. Otherwise, go to step 10. For example, if upgrading to Kafka 2.7.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.7.0 config: log.message.format.version: "2.7" inter.broker.protocol.version: "2.7" # ... Wait for the Cluster Operator to update the cluster. The Kafka cluster and clients are now using the new Kafka version. The brokers are configured to send messages using the inter-broker protocol version and message format version of the new version of Kafka. Following the Kafka upgrade, if required, you can: Update listeners to the GenericKafkaListener schema Upgrade consumers to use the incremental cooperative rebalance protocol Update existing custom resources 8.1.4. Updating listeners to the generic listener configuration AMQ Streams provides a GenericKafkaListener schema for the configuration of Kafka listeners in a Kafka resource. GenericKafkaListener replaces the KafkaListeners schema, which has been removed from AMQ Streams. With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique. The listeners configuration is defined as an array, but the deprecated format is also supported. For clients inside the OpenShift cluster, you can create plain (without encryption) or tls internal listeners. For clients outside the OpenShift cluster, you create external listeners and specify a connection mechanism, which can be nodeport , loadbalancer , ingress or route . The KafkaListeners schema used sub-properties for plain , tls and external listeners, with fixed ports for each. At any stage in the upgrade process, you must convert listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema. For example, if you are currently using the following configuration in your Kafka configuration: Old listener configuration listeners: plain: # ... tls: # ... external: type: loadbalancer # ... Convert the listeners into the new format using: New listener configuration listeners: #... 
- name: plain port: 9092 type: internal tls: false 1 - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 2 tls: true 1 The TLS property is now required for all listeners. 2 Options: ingress , loadbalancer , nodeport , route . Make sure to use the exact names and port numbers shown. For any additional configuration or overrides properties used with the old format, you need to update them to the new format. Changes introduced to the listener configuration : overrides is merged with the configuration section dnsAnnotations has been renamed annotations preferredAddressType has been renamed preferredNodePortAddressType address has been renamed alternativeNames loadBalancerSourceRanges and externalTrafficPolicy move to the listener configuration from the now deprecated template For example, this configuration: Old additional listener configuration listeners: external: type: loadbalancer authentication: type: tls overrides: bootstrap: dnsAnnotations: #... Changes to: New additional listener configuration listeners: #... - name: external port: 9094 type:loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: #... Important The name and port numbers shown in the new listener configuration must be used for backwards compatibility. Using any other values will cause renaming of the Kafka listeners and OpenShift services. For more information on the configuration options available for each type of listener, see the GenericKafkaListener schema reference . 8.1.5. Strategies for upgrading clients The right approach to upgrading your client applications (including Kafka Connect connectors) depends on your particular circumstances. Consuming applications need to receive messages in a message format that they understand. You can ensure that this is the case in one of two ways: By upgrading all the consumers for a topic before upgrading any of the producers. By having the brokers down-convert messages to an older format. Using broker down-conversion puts extra load on the brokers, so it is not ideal to rely on down-conversion for all topics for a prolonged period of time. For brokers to perform optimally they should not be down converting messages at all. Broker down-conversion is configured in two ways: The topic-level message.format.version configures it for a single topic. The broker-level log.message.format.version is the default for topics that do not have the topic-level message.format.version configured. Messages published to a topic in a new-version format will be visible to consumers, because brokers perform down-conversion when they receive messages from producers, not when they are sent to consumers. There are a number of strategies you can use to upgrade your clients: Consumers first Upgrade all the consuming applications. Change the broker-level log.message.format.version to the new version. Upgrade all the producing applications. This strategy is straightforward, and avoids any broker down-conversion. However, it assumes that all consumers in your organization can be upgraded in a coordinated way, and it does not work for applications that are both consumers and producers. There is also a risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log so that you cannot revert to the consumer version. Per-topic consumers first For each topic: Upgrade all the consuming applications. Change the topic-level message.format.version to the new version. 
Upgrade all the producing applications. This strategy avoids any broker down-conversion, and means you can proceed on a topic-by-topic basis. It does not work for applications that are both consumers and producers of the same topic. Again, it has the risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log. Per-topic consumers first, with down conversion For each topic: Change the topic-level message.format.version to the old version (or rely on the topic defaulting to the broker-level log.message.format.version ). Upgrade all the consuming and producing applications. Verify that the upgraded applications function correctly. Change the topic-level message.format.version to the new version. This strategy requires broker down-conversion, but the load on the brokers is minimized because it is only required for a single topic (or small group of topics) at a time. It also works for applications that are both consumers and producers of the same topic. This approach ensures that the upgraded producers and consumers are working correctly before you commit to using the new message format version. The main drawback of this approach is that it can be complicated to manage in a cluster with many topics and applications. Other strategies for upgrading client applications are also possible. Note It is also possible to apply multiple strategies. For example, for the first few applications and topics the "per-topic consumers first, with down conversion" strategy can be used. When this has proved successful another, more efficient strategy can be considered acceptable to use instead. 8.2. AMQ Streams custom resource upgrades After you have upgraded AMQ Streams to 1.7, you must ensure that your custom resources are using API version v1beta2 . You can do this any time after upgrading to 1.7, but the upgrades must be completed before the AMQ Streams minor version update. Important Upgrade of the custom resources to v1beta2 must be performed after upgrading the Cluster Operator , so the Cluster Operator can understand the resources. Note Upgrade of the custom resources to v1beta2 prepares AMQ Streams for a move to OpenShift CRD v1 , which is required for Kubernetes v1.22. CLI upgrades to custom resources AMQ Streams provides an API conversion tool with its release artifacts. You can download its ZIP or TAR.GZ from AMQ Streams download site . To use the tool, extract it and use the scripts in the bin directory. 
From its CLI, you can then use the tool to convert the format of your custom resources to v1beta2 in one of two ways: Section 8.2.2, "Converting custom resources configuration files using the API conversion tool" Section 8.2.3, "Converting custom resources directly using the API conversion tool" After the conversion of your custom resources, you must set v1beta2 as the storage API version in your CRDs: Section 8.2.4, "Upgrading CRDs to v1beta2 using the API conversion tool" Manual upgrades to custom resources Instead of using the API conversion tool to update custom resources to v1beta2 , you can manually update each custom resource to use v1beta2 : Update the Kafka custom resource, including the configurations for the other components: Section 8.2.5, "Upgrading Kafka resources to support v1beta2" Section 8.2.6, "Upgrading ZooKeeper to support v1beta2" Section 8.2.7, "Upgrading the Topic Operator to support v1beta2" Section 8.2.8, "Upgrading the Entity Operator to support v1beta2" Section 8.2.9, "Upgrading Cruise Control to support v1beta2" (if Cruise Control is deployed) Section 8.2.10, "Upgrading the API version of Kafka resources to v1beta2" Update the other custom resources that apply to your deployment: Section 8.2.11, "Upgrading Kafka Connect resources to v1beta2" Section 8.2.12, "Upgrading Kafka Connect S2I resources to v1beta2" Section 8.2.13, "Upgrading Kafka MirrorMaker resources to v1beta2" Section 8.2.14, "Upgrading Kafka MirrorMaker 2.0 resources to v1beta2" Section 8.2.15, "Upgrading Kafka Bridge resources to v1beta2" Section 8.2.16, "Upgrading Kafka User resources to v1beta2" Section 8.2.17, "Upgrading Kafka Topic resources to v1beta2" Section 8.2.18, "Upgrading Kafka Connector resources to v1beta2" Section 8.2.19, "Upgrading Kafka Rebalance resources to v1beta2" The manual procedures show the changes that are made to each custom resource. After these changes, you must use the API conversion tool to upgrade your CRDs. 8.2.1. API versioning Custom resources are edited and controlled using APIs added to OpenShift by CRDs. Put another way, CRDs extend the Kubernetes API to allow the creation of custom resources. CRDs are themselves resources within OpenShift. They are installed in an OpenShift cluster to define the versions of API for the custom resource. Each version of the custom resource API can define its own schema for that version. OpenShift clients, including the AMQ Streams Operators, access the custom resources served by the Kubernetes API server using a URL path ( API path ), which includes the API version. The introduction of v1beta2 updates the schemas of the custom resources. Older API versions are deprecated. The v1alpha1 API version is deprecated for the following AMQ Streams custom resources: Kafka KafkaConnect KafkaConnectS2I KafkaConnector KafkaMirrorMaker KafkaMirrorMaker2 KafkaTopic KafkaUser KafkaBridge KafkaRebalance The v1beta1 API version is deprecated for the following AMQ Streams custom resources: Kafka KafkaConnect KafkaConnectS2I KafkaMirrorMaker KafkaTopic KafkaUser Important The v1alpha1 and v1beta1 versions will be removed in the minor release. Additional resources Extend the Kubernetes API with CustomResourceDefinitions 8.2.2. Converting custom resources configuration files using the API conversion tool This procedure describes how to use the API conversion tool to convert YAML files describing the configuration for AMQ Streams custom resources into a format applicable to v1beta2 . To do so, you use the convert-file ( cf ) command. 
The convert-file command can convert YAML files containing multiple documents. For a multi-document YAML file, all the AMQ Streams custom resources it contains are converted. Any non-AMQ Streams OpenShift resources are replicated unmodified in the converted output file. After you have converted the YAML file, you must apply the configuration to update the custom resource in the cluster. Alternatively, if the GitOps synchronization mechanism is being used for updates on your cluster, you can use it to apply the changes. The conversion is only complete when the custom resource is updated in the OpenShift cluster. Alternatively, you can use the convert-resource procedure to convert custom resources directly . Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. The API conversion tool, which is provided with the release artifacts. The tool requires Java 11. Use the CLI help for more information on the API conversion tool, and the flags available for the convert-file command: bin/api-conversion.sh help bin/api-conversion.sh help convert-file Use bin/api-conversion.cmd for this procedure if you are using Windows. Table 8.1. Flags for YAML file conversion Flag Description -f , --file= NAME-OF-YAML-FILE Specifies the YAML file for the AMQ Streams custom resource being converted -o, --output= NAME-OF-CONVERTED-YAML-FILE Creates an output YAML file for the converted custom resource --in-place Updates the original source file with the converted YAML Procedure Run the API conversion tool with the convert-file command and appropriate flags. Example 1, converts a YAML file and displays the output, though the file does not change: bin/api-conversion.sh convert-file --file input.yaml Example 2, converts a YAML file, and writes the changes into the original source file: bin/api-conversion.sh convert-file --file input.yaml --in-place Example 3, converts a YAML file, and writes the changes into a new output file: bin/api-conversion.sh convert-file --file input.yaml --output output.yaml Update the custom resources using the converted configuration file. oc apply -f CONVERTED-CONFIG-FILE Verify that the custom resources have been converted. oc get KIND CUSTOM-RESOURCE-NAME -o yaml 8.2.3. Converting custom resources directly using the API conversion tool This procedure describes how to use the API conversion tool to convert AMQ Streams custom resources directly in the OpenShift cluster into a format applicable to v1beta2 . To do so, you use the convert-resource ( cr ) command. The command uses Kubernetes APIs to make the conversions. You can specify one or more of types of AMQ Streams custom resources, based on the kind property, or you can convert all types. You can also target a specific namespace or all namespaces for conversion. When targeting a namespace, you can convert all custom resources in that namespace, or convert a single custom resource by specifying its name and kind. Alternatively, you can use the convert-file procedure to convert and apply the YAML files describing the custom resources . Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. The API conversion tool, which is provided with the release artifacts. The tool requires Java 11 (OpenJDK). 
The steps require a user admin account with RBAC permission to: Get the AMQ Streams custom resources being converted using the --name option List the AMQ Streams custom resources being converted without using the --name option Replace the AMQ Streams custom resources being converted Use the CLI help for more information on the API conversion tool, and the flags available for the convert-resource command: bin/api-conversion.sh help bin/api-conversion.sh help convert-resource Use bin/api-conversion.cmd for this procedure if you are using Windows. Table 8.2. Flags for converting custom resources Flag Description -k , --kind Specifies the kinds of custom resources to be converted, or converts all resources if not specified -a , --all-namespaces Converts custom resources in all namespaces -n , --namespace Specifies an OpenShift namespace or OpenShift project, or uses the current namespace if not specified --name If --namespace and a single custom resource --kind is used, specifies the name of the custom resource being converted Procedure Run the API conversion tool with the convert-resource command and appropriate flags. Example 1, converts all AMQ Streams resources in current namespace: bin/api-conversion.sh convert-resource Example 2, converts all AMQ Streams resources in all namespaces: bin/api-conversion.sh convert-resource --all-namespaces Example 3, converts all AMQ Streams resources in the my-kafka namespace: bin/api-conversion.sh convert-resource --namespace my-kafka Example 4, converts only Kafka resources in all namespaces: bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka Example 5, converts Kafka and Kafka Connect resources in all namespaces: bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka --kind KafkaConnect Example 6, converts a Kafka custom resource named my-cluster in the my-kafka namespace: bin/api-conversion.sh convert-resource --kind Kafka --namespace my-kafka --name my-cluster Verify that the custom resources have been converted. oc get KIND CUSTOM-RESOURCE-NAME -o yaml 8.2.4. Upgrading CRDs to v1beta2 using the API conversion tool This procedure describes how to use the API conversion tool to convert the CRDs that define the schemas used to instantiate and manage AMQ Streams-specific resources in a format applicable to v1beta2 . To do so, you use the crd-upgrade command. Perform this procedure after converting all AMQ Streams custom resources in the whole OpenShift cluster to v1beta2 . If you upgrade your CRDs first, and then convert your custom resources, you will need to run this command again. The command updates spec.versions in the CRDs to declare v1beta2 as the storage API version. The command also updates custom resources so they are stored under v1beta2 . New custom resource instances are created from the specification of the storage API version, so only one API version is ever marked as the storage version. When you have upgraded the CRDs to use v1beta2 as the storage version, you should only use v1beta2 properties in your custom resources. Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. The API conversion tool, which is provided with the release artifacts. The tool requires Java 11 (OpenJDK). Custom resources have been converted to v1beta2 . 
The steps require a user admin account with RBAC permission to: List the AMQ Streams custom resources in all namespaces Replace the AMQ Streams custom resources being converted Update CRDs Replace the status of the CRDs Use the CLI help for more information on the API conversion tool: bin/api-conversion.sh help Use bin/api-conversion.cmd for this procedure if you are using Windows. Procedure If you have not done so, convert your custom resources to use v1beta2 . You can use the API conversion tool to do this in one of two ways: Section 8.2.2, "Converting custom resources configuration files using the API conversion tool" Section 8.2.3, "Converting custom resources directly using the API conversion tool" Or you can make the changes manually. Run the API conversion tool with the crd-upgrade command. bin/api-conversion.sh crd-upgrade Verify that the CRDs have been upgraded so that v1beta2 is the storage version. For example, for the Kafka topic CRD: apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: name: kafkatopics.kafka.strimzi.io #... spec: group: kafka.strimzi.io #... versions: - name: v1beta2 served: true storage: true #... status: #... storedVersions: - v1beta2 8.2.5. Upgrading Kafka resources to support v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each Kafka custom resource in your deployment. Update the Kafka custom resource in an editor. oc edit kafka KAFKA-CLUSTER If you have not already done so, update .spec.kafka.listener to the new generic listener format, as described in Section 8.1.4, "Updating listeners to the generic listener configuration" . Warning The old listener format is not supported in API version v1beta2 . If present, move affinity from .spec.kafka.affinity to .spec.kafka.template.pod.affinity . If present, move tolerations from .spec.kafka.tolerations to .spec.kafka.template.pod.tolerations . If present, remove .spec.kafka.template.tlsSidecarContainer . If present, remove .spec.kafka.tlsSidecarContainer . If either of the following policy configurations exist: .spec.kafka.template.externalBootstrapService.externalTrafficPolicy .spec.kafka.template.perPodService.externalTrafficPolicy Move the configuration to .spec.kafka.listeners[].configuration.externalTrafficPolicy , for both type: loadbalancer and type: nodeport listeners. Remove .spec.kafka.template.externalBootstrapService.externalTrafficPolicy or .spec.kafka.template.perPodService.externalTrafficPolicy . If either of the following loadbalancer listener configurations exist: .spec.kafka.template.externalBootstrapService.loadBalancerSourceRanges .spec.kafka.template.perPodService.loadBalancerSourceRanges Move the configuration to .spec.kafka.listeners[].configuration.loadBalancerSourceRanges , for type: loadbalancer listeners. Remove .spec.kafka.template.externalBootstrapService.loadBalancerSourceRanges or .spec.kafka.template.perPodService.loadBalancerSourceRanges . 
If type: external logging is configured in .spec.kafka.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties If the .spec.kafka.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.kafka.metrics field. kind: ConfigMap apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yaml: | <YAML> Add a .spec.kafka.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: kafka-metrics-config.yaml Delete the old .spec.kafka.metrics field. Save the file, exit the editor and wait for the updated custom resource to be reconciled. What to do For each Kafka custom resource, upgrade the configurations for ZooKeeper, Topic Operator, Entity Operator, and Cruise Control (if deployed) to support version v1beta2 . This is described in the following procedures. When all Kafka configurations are updated to support v1beta2 , you can upgrade the Kafka custom resource to v1beta2 . 8.2.6. Upgrading ZooKeeper to support v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each Kafka custom resource in your deployment. Update the Kafka custom resource in an editor. oc edit kafka KAFKA-CLUSTER If present, move affinity from .spec.zookeeper.affinity to .spec.zookeeper.template.pod.affinity . If present, move tolerations from .spec.zookeeper.tolerations to .spec.zookeeper.template.pod.tolerations . If present, remove .spec.zookeeper.template.tlsSidecarContainer . If present, remove .spec.zookeeper.tlsSidecarContainer . If type: external logging is configured in .spec.kafka.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties If the .spec.zookeeper.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.zookeeper.metrics field. kind: ConfigMap apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: zookeeper-metrics-config.yaml: | <YAML> Add a .spec.zookeeper.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: zookeeper-metrics-config.yaml Delete the old .spec.zookeeper.metrics field. Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.7. Upgrading the Topic Operator to support v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each Kafka custom resource in your deployment. Update the Kafka custom resource in an editor. 
oc edit kafka KAFKA-CLUSTER If Kafka.spec.topicOperator is used: Move affinity from .spec.topicOperator.affinity to .spec.entityOperator.template.pod.affinity . Move tolerations from .spec.topicOperator.tolerations to .spec.entityOperator.template.pod.tolerations . Move .spec.topicOperator.tlsSidecar to .spec.entityOperator.tlsSidecar . After moving affinity , tolerations , and tlsSidecar , move the remaining configuration in .spec.topicOperator to .spec.entityOperator.topicOperator . If type: external logging is configured in .spec.topicOperator.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties Note You can also complete this step as part of the Entity Operator upgrade . Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.8. Upgrading the Entity Operator to support v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Kafka.spec.entityOperator is configured, as described in Section 8.2.7, "Upgrading the Topic Operator to support v1beta2" . Procedure Perform the following steps for each Kafka custom resource in your deployment. Update the Kafka custom resource in an editor. oc edit kafka KAFKA-CLUSTER Move affinity from .spec.entityOperator.affinity to .spec.entityOperator.template.pod.affinity . Move tolerations from .spec.entityOperator.tolerations to .spec.entityOperator.template.pod.tolerations . If type: external logging is configured in .spec.entityOperator.userOperator.logging or .spec.entityOperator.topicOperator.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.9. Upgrading Cruise Control to support v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Cruise Control is configured and deployed. See Deploying Cruise Control in the Using AMQ Streams on OpenShift guide. Procedure Perform the following steps for each Kafka.spec.cruiseControl configuration in your Kafka cluster. Update the Kafka custom resource in an editor. oc edit kafka KAFKA-CLUSTER If type: external logging is configured in .spec.cruiseControl.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties If the .spec.cruiseControl.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.cruiseControl.metrics field. 
kind: ConfigMap apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: cruise-control-metrics-config.yaml: | <YAML> Add a .spec.cruiseControl.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: cruise-control-metrics-config.yaml Delete the old .spec.cruiseControl.metrics field. Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.10. Upgrading the API version of Kafka resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. You have updated the following configurations within the Kafka custom resource: ZooKeeper Topic Operator Entity Operator Cruise Control (if Cruise Control is deployed) Procedure Perform the following steps for each Kafka custom resource in your deployment. Update the Kafka custom resource in an editor. oc edit kafka KAFKA-CLUSTER Update the apiVersion of the Kafka custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1beta1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.11. Upgrading Kafka Connect resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each KafkaConnect custom resource in your deployment. Update the KafkaConnect custom resource in an editor. oc edit kafkaconnect KAFKA-CONNECT-CLUSTER If present, move: KafkaConnect.spec.affinity KafkaConnect.spec.tolerations to: KafkaConnect.spec.template.pod.affinity KafkaConnect.spec.template.pod.tolerations For example, move: spec: # ... affinity: # ... tolerations: # ... to: spec: # ... template: pod: affinity: # ... tolerations: # ... If type: external logging is configured in .spec.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties If the .spec.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field. kind: ConfigMap apiVersion: v1 metadata: name: kafka-connect-metrics labels: app: strimzi data: connect-metrics-config.yaml: | <YAML> Add a .spec.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-connect-metrics key: connect-metrics-config.yaml Delete the old .spec.metrics field. Update the apiVersion of the KafkaConnect custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1beta1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.12. Upgrading Kafka Connect S2I resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each KafkaConnectS2I custom resource in your deployment. Update the KafkaConnectS2I custom resource in an editor. 
oc edit kafkaconnects2i S2I-CLUSTER If present, move: KafkaConnectS2I.spec.affinity KafkaConnectS2I.spec.tolerations to: KafkaConnectS2I.spec.template.pod.affinity KafkaConnectS2I.spec.template.pod.tolerations For example, move: spec: # ... affinity: # ... tolerations: # ... to: spec: # ... template: pod: affinity: # ... tolerations: # ... If type: external logging is configured in .spec.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties If the .spec.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field. kind: ConfigMap apiVersion: v1 metadata: name: kafka-connect-s2i-metrics labels: app: strimzi data: connect-s2i-metrics-config.yaml: | <YAML> Add a .spec.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-connect-s2i-metrics key: connect-s2i-metrics-config.yaml Delete the old .spec.metrics field Update the apiVersion of the KafkaConnectS2I custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1beta1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.13. Upgrading Kafka MirrorMaker resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. MirrorMaker is configured and deployed. See Section 5.3.1, "Deploying Kafka MirrorMaker to your OpenShift cluster" . Procedure Perform the following steps for each KafkaMirrorMaker custom resource in your deployment. Update the KafkaMirrorMaker custom resource in an editor. oc edit kafkamirrormaker MIRROR-MAKER If present, move: KafkaMirrorMaker.spec.affinity KafkaMirrorMaker.spec.tolerations to: KafkaMirrorMaker.spec.template.pod.affinity KafkaMirrorMaker.spec.template.pod.tolerations For example, move: spec: # ... affinity: # ... tolerations: # ... to: spec: # ... template: pod: affinity: # ... tolerations: # ... If type: external logging is configured in .spec.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties If the .spec.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field. kind: ConfigMap apiVersion: v1 metadata: name: kafka-mm-metrics labels: app: strimzi data: mm-metrics-config.yaml: | <YAML> Add a .spec.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-mm-metrics key: mm-metrics-config.yaml Delete the old .spec.metrics field. 
Update the apiVersion of the KafkaMirrorMaker custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1beta1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.14. Upgrading Kafka MirrorMaker 2.0 resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. MirrorMaker 2.0 is configured and deployed. See Section 5.3.1, "Deploying Kafka MirrorMaker to your OpenShift cluster" . Procedure Perform the following steps for each KafkaMirrorMaker2 custom resource in your deployment. Update the KafkaMirrorMaker2 custom resource in an editor. oc edit kafkamirrormaker2 MIRROR-MAKER-2 If present, move affinity from .spec.affinity to .spec.template.pod.affinity . If present, move tolerations from .spec.tolerations to .spec.template.pod.tolerations . If type: external logging is configured in .spec.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties If the .spec.metrics field is used to enable metrics: Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field. kind: ConfigMap apiVersion: v1 metadata: name: kafka-mm2-metrics labels: app: strimzi data: mm2-metrics-config.yaml: | <YAML> Add a .spec.metricsConfig property that points to the ConfigMap and key: metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-mm2-metrics key: mm2-metrics-config.yaml Delete the old .spec.metrics field. Update the apiVersion of the KafkaMirrorMaker2 custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1alpha1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.15. Upgrading Kafka Bridge resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. The Kafka Bridge is configured and deployed. See Section 5.4.1, "Deploying Kafka Bridge to your OpenShift cluster" . Procedure Perform the following steps for each KafkaBridge resource in your deployment. Update the KafkaBridge custom resource in an editor. oc edit kafkabridge KAFKA-BRIDGE If type: external logging is configured in KafkaBridge.spec.logging : Replace the name of the ConfigMap containing the logging configuration: logging: type: external name: my-config-map With the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging is stored: logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties Update the apiVersion of the KafkaBridge custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1alpha1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.16. Upgrading Kafka User resources to v1beta2 Prerequisites A User Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each KafkaUser custom resource in your deployment. Update the KafkaUser custom resource in an editor. 
oc edit kafkauser KAFKA-USER Update the apiVersion of the KafkaUser custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1beta1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.17. Upgrading Kafka Topic resources to v1beta2 Prerequisites A Topic Operator supporting the v1beta2 API version is up and running. Procedure Perform the following steps for each KafkaTopic custom resource in your deployment. Update the KafkaTopic custom resource in an editor. oc edit kafkatopic KAFKA-TOPIC Update the apiVersion of the KafkaTopic custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1beta1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.18. Upgrading Kafka Connector resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. KafkaConnector custom resources are deployed to manage connector instances. See Section 5.2.4, "Creating and managing connectors" . Procedure Perform the following steps for each KafkaConnector custom resource in your deployment. Update the KafkaConnector custom resource in an editor. oc edit kafkaconnector KAFKA-CONNECTOR Update the apiVersion of the KafkaConnector custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1alpha1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.2.19. Upgrading Kafka Rebalance resources to v1beta2 Prerequisites A Cluster Operator supporting the v1beta2 API version is up and running. Cruise Control is configured and deployed. See Deploying Cruise Control in the Using AMQ Streams on OpenShift guide. Procedure Perform the following steps for each KafkaRebalance custom resource in your deployment. Update the KafkaRebalance custom resource in an editor. oc edit kafkarebalance KAFKA-REBALANCE Update the apiVersion of the KafkaRebalance custom resource to v1beta2 : Replace: apiVersion: kafka.strimzi.io/v1alpha1 with: apiVersion: kafka.strimzi.io/v1beta2 Save the file, exit the editor and wait for the updated custom resource to be reconciled. 8.3. Upgrading consumers to cooperative rebalancing You can upgrade Kafka consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances instead of the default eager rebalance protocol. The new protocol was added in Kafka 2.4.0. Consumers keep their partition assignments in a cooperative rebalance and only revoke them at the end of the process, if needed to achieve a balanced cluster. This reduces the unavailability of the consumer group or Kafka Streams application. Note Upgrading to the incremental cooperative rebalance protocol is optional. The eager rebalance protocol is still supported. Prerequisites You have upgraded Kafka brokers and client applications to Kafka 2.7.0. Procedure To upgrade a Kafka consumer to use the incremental cooperative rebalance protocol: Replace the Kafka clients .jar file with the new version. In the consumer configuration, append cooperative-sticky to the partition.assignment.strategy . For example, if the range strategy is set, change the configuration to range, cooperative-sticky . Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart. 
Reconfigure each consumer in the group by removing the earlier partition.assignment.strategy from the consumer configuration, leaving only the cooperative-sticky strategy. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart. To upgrade a Kafka Streams application to use the incremental cooperative rebalance protocol: Replace the Kafka Streams .jar file with the new version. In the Kafka Streams configuration, set the upgrade.from configuration parameter to the Kafka version you are upgrading from (for example, 2.3). Restart each of the stream processors (nodes) in turn. Remove the upgrade.from configuration parameter from the Kafka Streams configuration. Restart each consumer in the group in turn. Additional resources Notable changes in 2.4.0 in the Apache Kafka documentation. | [
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"replace -f install/cluster-operator",
"\"Version 2.4.0 is not supported. Supported versions are: 2.6.0, 2.6.1, 2.7.0.\"",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:{ContainerVersion}",
"edit kafka my-cluster",
"kind: Kafka spec: # kafka: version: 2.6.0 config: log.message.format.version: \"2.6\" inter.broker.protocol.version: \"2.6\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.7.0 1 config: log.message.format.version: \"2.6\" 2 inter.broker.protocol.version: \"2.6\" 3 #",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.7.0 config: log.message.format.version: \"2.6\" inter.broker.protocol.version: \"2.7\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.7.0 config: log.message.format.version: \"2.7\" inter.broker.protocol.version: \"2.7\" #",
"listeners: plain: # tls: # external: type: loadbalancer #",
"listeners: # - name: plain port: 9092 type: internal tls: false 1 - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 2 tls: true",
"listeners: external: type: loadbalancer authentication: type: tls overrides: bootstrap: dnsAnnotations: #",
"listeners: # - name: external port: 9094 type:loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: #",
"bin/api-conversion.sh help bin/api-conversion.sh help convert-file",
"bin/api-conversion.sh convert-file --file input.yaml",
"bin/api-conversion.sh convert-file --file input.yaml --in-place",
"bin/api-conversion.sh convert-file --file input.yaml --output output.yaml",
"apply -f CONVERTED-CONFIG-FILE",
"get KIND CUSTOM-RESOURCE-NAME -o yaml",
"bin/api-conversion.sh help bin/api-conversion.sh help convert-resource",
"bin/api-conversion.sh convert-resource",
"bin/api-conversion.sh convert-resource --all-namespaces",
"bin/api-conversion.sh convert-resource --namespace my-kafka",
"bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka",
"bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka --kind KafkaConnect",
"bin/api-conversion.sh convert-resource --kind Kafka --namespace my-kafka --name my-cluster",
"get KIND CUSTOM-RESOURCE-NAME -o yaml",
"bin/api-conversion.sh help",
"bin/api-conversion.sh crd-upgrade",
"apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: name: kafkatopics.kafka.strimzi.io # spec: group: kafka.strimzi.io # versions: - name: v1beta2 served: true storage: true # status: # storedVersions: - v1beta2",
"edit kafka KAFKA-CLUSTER",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: kafka-metrics-config.yaml",
"edit kafka KAFKA-CLUSTER",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: zookeeper-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: zookeeper-metrics-config.yaml",
"edit kafka KAFKA-CLUSTER",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties",
"edit kafka KAFKA-CLUSTER",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties",
"edit kafka KAFKA-CLUSTER",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: cruise-control-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-metrics key: cruise-control-metrics-config.yaml",
"edit kafka KAFKA-CLUSTER",
"apiVersion: kafka.strimzi.io/v1beta1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkaconnect KAFKA-CONNECT-CLUSTER",
"KafkaConnect.spec.affinity",
"KafkaConnect.spec.tolerations",
"KafkaConnect.spec.template.pod.affinity",
"KafkaConnect.spec.template.pod.tolerations",
"spec: # affinity: # tolerations: #",
"spec: # template: pod: affinity: # tolerations: #",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-connect-metrics labels: app: strimzi data: connect-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-connect-metrics key: connect-metrics-config.yaml",
"apiVersion: kafka.strimzi.io/v1beta1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkaconnects2i S2I-CLUSTER",
"KafkaConnectS2I.spec.affinity",
"KafkaConnectS2I.spec.tolerations",
"KafkaConnectS2I.spec.template.pod.affinity",
"KafkaConnectS2I.spec.template.pod.tolerations",
"spec: # affinity: # tolerations: #",
"spec: # template: pod: affinity: # tolerations: #",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-connect-s2i-metrics labels: app: strimzi data: connect-s2i-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-connect-s2i-metrics key: connect-s2i-metrics-config.yaml",
"apiVersion: kafka.strimzi.io/v1beta1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkamirrormaker MIRROR-MAKER",
"KafkaMirrorMaker.spec.affinity",
"KafkaMirrorMaker.spec.tolerations",
"KafkaMirrorMaker.spec.template.pod.affinity",
"KafkaMirrorMaker.spec.template.pod.tolerations",
"spec: # affinity: # tolerations: #",
"spec: # template: pod: affinity: # tolerations: #",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-mm-metrics labels: app: strimzi data: mm-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-mm-metrics key: mm-metrics-config.yaml",
"apiVersion: kafka.strimzi.io/v1beta1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkamirrormaker2 MIRROR-MAKER-2",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j.properties",
"kind: ConfigMap apiVersion: v1 metadata: name: kafka-mm2-metrics labels: app: strimzi data: mm2-metrics-config.yaml: | <YAML>",
"metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: kafka-mm2-metrics key: mm2-metrics-config.yaml",
"apiVersion: kafka.strimzi.io/v1alpha1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkabridge KAFKA-BRIDGE",
"logging: type: external name: my-config-map",
"logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: log4j2.properties",
"apiVersion: kafka.strimzi.io/v1alpha1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkauser KAFKA-USER",
"apiVersion: kafka.strimzi.io/v1beta1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkatopic KAFKA-TOPIC",
"apiVersion: kafka.strimzi.io/v1beta1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkaconnector KAFKA-CONNECTOR",
"apiVersion: kafka.strimzi.io/v1alpha1",
"apiVersion: kafka.strimzi.io/v1beta2",
"edit kafkarebalance KAFKA-REBALANCE",
"apiVersion: kafka.strimzi.io/v1alpha1",
"apiVersion: kafka.strimzi.io/v1beta2"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-upgrade-str |
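The two-pass consumer change described in Section 8.3, "Upgrading consumers to cooperative rebalancing", can be sketched as a minimal consumer.properties fragment. With the Java client, partition.assignment.strategy takes a list of assignor class names; the group ID and the starting range strategy are assumptions for illustration only. For the first rolling restart, list both assignors, keeping the earlier strategy first:

group.id=my-consumer-group
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor,org.apache.kafka.clients.consumer.CooperativeStickyAssignor

After every consumer has rejoined the group, remove the earlier assignor and perform the second rolling restart with only the cooperative strategy:

partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor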
11.13. Stopping Volumes | 11.13. Stopping Volumes To stop a volume, use the following command: For example, to stop test-volume: | [
"gluster volume stop VOLNAME",
"gluster volume stop test-volume Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y Stopping volume test-volume has been successful"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/stopping_volumes |
Chapter 14. Managing custom file type content | Chapter 14. Managing custom file type content In Satellite, you might require methods of managing and distributing SSH keys and source code files or larger files such as virtual machine images and ISO files. To achieve this, custom products in Red Hat Satellite include repositories for custom file types. This provides a generic method to incorporate arbitrary files in a product. You can upload files to the repository and synchronize files from an upstream Satellite Server. When you add files to a custom file type repository, you can use the normal Satellite management functions such as adding a specific version to a content view to provide version control and making the repository of files available on various Capsule Servers. You must download the files on clients over HTTP or HTTPS using curl -O . You can create a file type repository in Satellite Server only in a custom product, but there is flexibility in how you create the repository source. You can create an independent repository source in a directory on Satellite Server, or on a remote HTTP server, and then synchronize the contents of that directory into Satellite. This method is useful when you have multiple files to add to a Satellite repository. 14.1. Creating a local source for a custom file type repository You can create a custom file type repository source, from a directory of files, on the base system where Satellite is installed using Pulp Manifest . You can then synchronize the files into a repository and manage the custom file type content like any other content type. Use this procedure to configure a repository in a directory on the base system where Satellite is installed. To create a file type repository in a directory on a remote server, see Section 14.2, "Creating a remote source for a custom file type repository" . Procedure Ensure the Utils repository is enabled. Enable the satellite-utils module: Install the Pulp Manifest package: Note that this command stops the Satellite service and re-runs satellite-installer . Alternatively, to prevent downtime caused by stopping the service, you can use the following: Create a directory that you want to use as the file type repository, such as: Add the parent folder into allowed import paths: Add files to the directory or create a test file: Run the Pulp Manifest command to create the manifest: Verify the manifest was created: Now, you can import your local source as a custom file type repository. Use the file:// URL scheme and the name of the directory to specify an Upstream URL , such as file:///var/lib/pulp/ local_repos / my_file_repo . For more information, see Section 14.3, "Creating a custom file type repository" . If you update the contents of your directory, re-run Pulp Manifest and sync the repository in Satellite. For more information, see Section 4.7, "Synchronizing repositories" . 14.2. Creating a remote source for a custom file type repository You can create a custom file type repository source from a directory of files that is external to Satellite Server using Pulp Manifest . You can then synchronize the files into a repository on Satellite Server over HTTP or HTTPS and manage the custom file type content like any other content type. Use this procedure to configure a repository in a directory on a remote server. To create a file type repository in a directory on the base system where Satellite Server is installed, see Section 14.1, "Creating a local source for a custom file type repository" . 
Prerequisites You have a server running Red Hat Enterprise Linux 8 registered to your Satellite or the Red Hat CDN. Your server has an entitlement to the Red Hat Enterprise Linux Server and Red Hat Satellite Utils repositories. You have installed an HTTP server. For more information about configuring a web server, see Setting up the Apache HTTP web server in Red Hat Enterprise Linux 8 Deploying different types of servers . Procedure On your server, enable the required repositories: Enable the satellite-utils module: Install the Pulp Manifest package: Create a directory that you want to use as the file type repository in the HTTP server's public folder: Add files to the directory or create a test file: Run the Pulp Manifest command to create the manifest: Verify the manifest was created: Now, you can import your remote source as a custom file type repository. Use the path to the directory to specify an Upstream URL , such as http://server.example.com/ my_file_repo . For more information, see Section 14.3, "Creating a custom file type repository" . If you update the contents of your directory, re-run Pulp Manifest and sync the repository in Satellite. For more information, see Section 4.7, "Synchronizing repositories" . 14.3. Creating a custom file type repository The procedure for creating a custom file type repository is the same as the procedure for creating any custom content, except that when you create the repository, you select the file type. You must create a product and then add a custom repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select a product that you want to create a repository for. On the Repositories tab, click New Repository . In the Name field, enter a name for the repository. Satellite automatically completes the Label field based on the name. Optional: In the Description field, enter a description for the repository. From the Type list, select file as type of repository. Optional: In the Upstream URL field, enter the URL of the upstream repository to use as a source. If you do not enter an upstream URL, you can manually upload packages. For more information, see Section 14.4, "Uploading files to a custom file type repository" . Select Verify SSL to verify that the SSL certificates of the repository are signed by a trusted CA. Optional: In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication. Optional: In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication. From the Mirroring Policy list, select the type of content synchronization Satellite Server performs. For more information, see Section 4.12, "Mirroring policies overview" . Optional: In the HTTP Proxy Policy field, select an HTTP proxy. By default, it uses the Global Default HTTP proxy. Optional: You can clear the Unprotected checkbox to require a subscription entitlement certificate for accessing this repository. By default, the repository is published through HTTP. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository. 
Optional: In the SSL Client Cert field, select the SSL Client Certificate for the repository. Optional: In the SSL Client Key field, select the SSL Client Key for the repository. Click Save to create the repository. CLI procedure Create a custom product: Table 14.1. Optional parameters for the hammer product create command Option Description --gpg-key-id gpg_key_id GPG key numeric identifier --sync-plan-id sync_plan_id Sync plan numeric identifier --sync-plan sync_plan_name Sync plan name to search by Create a file type repository: Table 14.2. Optional parameters for the hammer repository create command Option Description --checksum-type sha_version Repository checksum (either sha1 or sha256 ) --download-policy policy_name Download policy for repositories (either immediate or on_demand ) --gpg-key-id gpg_key_id GPG key numeric identifier --gpg-key gpg_key_name Key name to search by --mirror-on-sync boolean Must this repo be mirrored from the source, and stale packages removed, when synced? Set to true or false , yes or no , 1 or 0 . --publish-via-http boolean Must this also be published using HTTP? Set to true or false , yes or no , 1 or 0 . --upstream-password repository_password Password for the upstream repository user --upstream-username repository_username Upstream repository user, if required for authentication --url My_Repository_URL URL of the remote repository --verify-ssl-on-sync boolean Verify that the upstream SSL certificates of the remote repository are signed by a trusted CA? Set to true or false , yes or no , 1 or 0 . 14.4. Uploading files to a custom file type repository Use this procedure to upload files to a custom file type repository. Procedure In the Satellite web UI, navigate to Content > Products . Select a custom product by name. Select a file type repository by name. Click Browse to search and select the file you want to upload. Click Upload to upload the selected file to Satellite Server. Visit the URL where the repository is published to see the file. CLI procedure The --path option can indicate a file, a directory of files, or a glob expression of files. Globs must be escaped by single or double quotes. 14.5. Downloading files to a host from a custom file type repository You can download files to a client over HTTPS using curl -O , and optionally over HTTP if the Unprotected option for repositories is selected. Prerequisites You have a custom file type repository. For more information, see Section 14.3, "Creating a custom file type repository" . You know the name of the file you want to download to clients from the file type repository. To use HTTPS you require the following certificates on the client: The katello-server-ca.crt . For more information, see Importing the Katello Root CA Certificate in Administering Red Hat Satellite . An Organization Debug Certificate. For more information, see Creating an Organization Debug Certificate in Administering Red Hat Satellite . Procedure In the Satellite web UI, navigate to Content > Products . Select a custom product by name. Select a file type repository by name. Ensure to select the Unprotected checkbox to access the repository published through HTTP. Copy the Published At URL. On your client, download the file from Satellite Server: For HTTPS: For HTTP: CLI procedure List the file type repositories. Display the repository information. 
If Unprotected is enabled, the output is similar to this: If Unprotected is not enabled, the output is similar to this: On your client, download the file from Satellite Server: For HTTPS: For HTTP: | [
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils",
"satellite-maintain packages install python3.11-pulp_manifest",
"satellite-maintain packages unlock satellite-maintain packages install python39-pulp_manifest satellite-maintain packages lock",
"mkdir -p /var/lib/pulp/ local_repos / my_file_repo",
"satellite-installer --foreman-proxy-content-pulpcore-additional-import-paths /var/lib/pulp/ local_repos",
"touch /var/lib/pulp/ local_repos / my_file_repo / test.txt",
"pulp-manifest /var/lib/pulp/ local_repos / my_file_repo",
"ls /var/lib/pulp/ local_repos / my_file_repo PULP_MANIFEST test.txt",
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils",
"dnf install python3.11-pulp_manifest",
"mkdir /var/www/html/pub/ my_file_repo",
"touch /var/www/html/pub/ my_file_repo / test.txt",
"pulp-manifest /var/www/html/pub/ my_file_repo",
"ls /var/www/html/pub/ my_file_repo PULP_MANIFEST test.txt",
"hammer product create --description \" My_Files \" --name \" My_File_Product \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"",
"hammer repository create --content-type \"file\" --name \" My_Files \" --organization \" My_Organization \" --product \" My_File_Product \"",
"hammer repository upload-content --id repo_ID --organization \" My_Organization \" --path example_file",
"curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"hammer repository list --content-type file ---|------------|-------------------|--------------|---- ID | NAME | PRODUCT | CONTENT TYPE | URL ---|------------|-------------------|--------------|---- 7 | My_Files | My_File_Product | file | ---|------------|-------------------|--------------|----",
"hammer repository info --name \" My_Files \" --organization-id My_Organization_ID --product \" My_File_Product \"",
"Publish Via HTTP: yes Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /",
"Publish Via HTTP: no Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /",
"curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File",
"curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/Managing_Custom_File_Type_Content_content-management |
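Before adding the remote source as a repository, you can optionally confirm that the PULP_MANIFEST file is reachable from Satellite Server over HTTP. This is a hedged check, not part of the documented procedure; it reuses the example hostname and directory from Section 14.2, so adjust the path to wherever your web server actually publishes the directory:

curl http://server.example.com/my_file_repo/PULP_MANIFEST

The command should return the manifest listing, one line per file with its checksum and size; if it returns an error, fix the web server configuration before setting the Upstream URL.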
Chapter 9. Using the Ceph block device Python module | Chapter 9. Using the Ceph block device Python module The rbd python module provides file-like access to Ceph block device images. In order to use this built-in tool, import the rbd and rados Python modules. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Connect to RADOS and open an IO context: cluster = rados.Rados(conffile='my_ceph.conf') cluster.connect() ioctx = cluster.open_ioctx('mypool') Instantiate an rbd.RBD object, which you use to create the image: To perform I/O on the image, instantiate an rbd.Image object: This writes 'foo' to the first 600 bytes of the image. Note that data cannot be unicode - librbd does not know how to deal with characters wider than a char . Close the image, the IO context and the connection to RADOS: To be safe, each of these calls must be in a separate finally block: This can be cumbersome, so the Rados , Ioctx , and Image classes can be used as context managers that close or shut down automatically. Using them as context managers, the above example becomes: | [
"cluster = rados.Rados(conffile='my_ceph.conf') cluster.connect() ioctx = cluster.open_ioctx('mypool')",
"rbd_inst = rbd.RBD() size = 4 * 1024**3 # 4 GiB rbd_inst.create(ioctx, 'myimage', size)",
"image = rbd.Image(ioctx, 'myimage') data = 'foo' * 200 image.write(data, 0)",
"image.close() ioctx.close() cluster.shutdown()",
"import rados import rbd cluster = rados.Rados(conffile='my_ceph_conf') try: ioctx = cluster.open_ioctx('my_pool') try: rbd_inst = rbd.RBD() size = 4 * 1024**3 # 4 GiB rbd_inst.create(ioctx, 'myimage', size) image = rbd.Image(ioctx, 'myimage') try: data = 'foo' * 200 image.write(data, 0) finally: image.close() finally: ioctx.close() finally: cluster.shutdown()",
"with rados.Rados(conffile='my_ceph.conf') as cluster: with cluster.open_ioctx('mypool') as ioctx: rbd_inst = rbd.RBD() size = 4 * 1024**3 # 4 GiB rbd_inst.create(ioctx, 'myimage', size) with rbd.Image(ioctx, 'myimage') as image: data = 'foo' * 200 image.write(data, 0)"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/block_device_guide/using-the-ceph-block-device-python-module_block |
Chapter 5. Configuring network access | Chapter 5. Configuring network access Configure network access for your Data Grid deployment and find out about internal network services. 5.1. Exposing Data Grid clusters on the network Make Data Grid clusters available on the network so you can access Data Grid Console as well as REST and Hot Rod endpoints. By default, the Data Grid chart exposes deployments through a Route but you can configure it to expose clusters via Load Balancer or Node Port. You can also configure the Data Grid chart so that deployments are not exposed on the network and only available internally to the OpenShift cluster. Procedure Specify one of the following for the deploy.expose.type field: Option Description Route Exposes Data Grid through a route. This is the default value. LoadBalancer Exposes Data Grid through a load balancer service. NodePort Exposes Data Grid through a node port service. "" (empty value) Disables exposing Data Grid on the network. Optionally specify a hostname with the deploy.expose.host field if you expose Data Grid through a route. Optionally specify a port with the deploy.expose.nodePort field if you expose Data Grid through a node port service. Install or upgrade your Data Grid Helm release. 5.2. Retrieving network service details Get network service details so you can connect to Data Grid clusters. Prerequisites Expose your Data Grid cluster on the network. Have an oc client. Procedure Use one of the following commands to retrieve network service details: If you expose Data Grid through a route: If you expose Data Grid through a load balancer or node port service: 5.3. Network services The Data Grid chart creates default network services for internal access. Service Port Protocol Description <helm_release_name> 11222 TCP Provides access to Data Grid Hot Rod and REST endpoints. <helm_release_name> 11223 TCP Provides access to Data Grid metrics. <helm_release_name>-ping 8888 TCP Allows Data Grid pods to discover each other and form clusters. You can retrieve details about internal network services as follows: | [
"oc get routes",
"oc get services",
"oc get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) infinispan ClusterIP 192.0.2.0 <none> 11222/TCP,11223/TCP infinispan-ping ClusterIP None <none> 8888/TCP"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/building_and_deploying_data_grid_clusters_with_helm/network-access |
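Once the route or service host is known, you can exercise the REST endpoint on port 11222 directly. The following is a hedged sketch only: the route hostname comes from the oc get routes output, the cache name mycache is assumed to exist already, and the user credentials are placeholders for whatever security settings were supplied when the chart was installed. Adjust the scheme and the -k flag to match how TLS is terminated on your route.

curl -k -u <username>:<password> https://<route_hostname>/rest/v2/caches
curl -k -u <username>:<password> -X PUT -d 'hello' https://<route_hostname>/rest/v2/caches/mycache/greeting
curl -k -u <username>:<password> https://<route_hostname>/rest/v2/caches/mycache/greeting

The first command lists the caches on the cluster, the second writes the value hello under the key greeting, and the third reads it back.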
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint | Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In earlier versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/accessing-the-rados-object-gateway-s3-endpoint_rhodf
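A possible way to locate and exercise the endpoint, sketched with the default route name given above. The openshift-storage namespace, the bucket name, and the S3 credentials are assumptions; in practice the access key and secret key usually come from the secret that is created for an ObjectBucketClaim backed by the RGW storage class.

oc get route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o jsonpath='{.spec.host}'
AWS_ACCESS_KEY_ID=<access_key> AWS_SECRET_ACCESS_KEY=<secret_key> aws s3 ls s3://<bucket_name> --endpoint-url https://<route_host>

The first command prints the route hostname; the second lists the contents of an existing bucket through that endpoint.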
Chapter 96. ExternalConfigurationVolumeSource schema reference | Chapter 96. ExternalConfigurationVolumeSource schema reference The type ExternalConfigurationVolumeSource has been deprecated. Please use AdditionalVolume instead. Used in: ExternalConfiguration Property Property type Description name string Name of the volume which will be added to the Kafka Connect pods. secret SecretVolumeSource Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. configMap ConfigMapVolumeSource Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ExternalConfigurationVolumeSource-reference |
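For reference, the deprecated usage that this schema describes looks like the following fragment of a KafkaConnect resource; the volume and ConfigMap names are placeholders:

externalConfiguration:
  volumes:
    - name: connector-configuration
      configMap:
        name: my-connector-configuration

Volumes declared this way are mounted into the Kafka Connect pods under /opt/kafka/external-configuration/<volume_name>. New configurations should declare volumes with the AdditionalVolume type referenced above instead.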
Chapter 122. Configuring Single Sign-On for the RHEL 8 web console in the IdM domain | Chapter 122. Configuring Single Sign-On for the RHEL 8 web console in the IdM domain Using Single Sign-on (SSO) authentication provided by Identity Management (IdM) in the RHEL 8 web console has the following advantages: Users with a Kerberos ticket in the IdM domain do not need to provide login credentials to access the web console. Users with a certificate issued by the IdM certificate authority (CA) do not need to provide login credentials to access the web console. The web console server automatically switches to a certificate issued by the IdM certificate authority and accepted by browsers. Certificate configuration is not necessary. Configuring SSO for logging into the RHEL web console requires to: Add machines to the IdM domain using the RHEL 8 web console. If you want to use Kerberos for authentication, you must obtain a Kerberos ticket on your machine. Allow administrators on the IdM server to run any command on any host. Prerequisites The RHEL web console service is installed on a RHEL 8 system. For details, see Installing the web console . The IdM client is installed on the system where the RHEL web console service is running. For details, see IdM client installation . 122.1. Logging in to the web console using Kerberos authentication As an Identity Management (IdM) user, you can use Single Sign-On (SSO) authentication to automatically access the RHEL web console in your browser. Important With SSO, you usually do not have any administrative privileges in the web console. This only works if you configure passwordless sudo. The web console does not interactively ask for a sudo password. Prerequisites The IdM domain is resolvable by DNS. For instance, the SRV records of the Kerberos server are resolvable: If the system where you are running your browser is a RHEL 8 system and has been joined to the IdM domain , you are using the same DNS as the web console server and no DNS configuration is necessary. You have configured the web console server for SSO authentication. The host on which the web console service is running is an IdM client. You have configured the web console client for SSO authentication. Procedure Obtain your Kerberos ticket-granting ticket: Enter the fully qualified name of the host on which the web console service is running into your browser: At this point, you are successfully connected to the RHEL web console and you can start with configuration. For example, you can join a RHEL 8 system to the IdM domain in the web console . 122.2. Joining a RHEL 8 system to an IdM domain using the web console You can use the web console to join a Red Hat Enterprise Linux 8 system to the Identity Management (IdM) domain. Prerequisites The IdM domain is running and reachable from the client you want to join. You have the IdM domain administrator credentials. You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Configuration field of the Overview tab click Join Domain . In the Join a Domain dialog box, enter the host name of the IdM server in the Domain Address field. In the Domain administrator name field, enter the user name of the IdM administration account. In the Domain administrator password , add a password. Click Join . 
Verification If the RHEL 8 web console did not display an error, the system has been joined to the IdM domain and you can see the domain name in the System screen. To verify that the user is a member of the domain, click the Terminal page and type the id command: Additional resources Planning Identity Management Installing Identity Management | [
"host -t SRV _kerberos._udp.idm.example.com _kerberos._udp.idm.example.com has SRV record 0 100 88 dc.idm.example.com",
"kinit [email protected] Password for [email protected]:",
"https:// <dns_name> :9090",
"id euid=548800004(example_user) gid=548800004(example_user) groups=548800004(example_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/configuring_single_sign_on_for_the_rhel_8_web_console_in_the_idm_domain_configuring-and-managing-idm |
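The prerequisite of allowing IdM administrators to run any command on any host, together with the passwordless sudo noted above, can be met with an IdM sudo rule. This is a hedged sketch only; the rule name is arbitrary, the commands are run as an IdM administrator, and your environment may already manage sudo differently:

ipa sudorule-add admins-all-hosts --hostcat=all --cmdcat=all
ipa sudorule-add-user admins-all-hosts --groups=admins
ipa sudorule-add-option admins-all-hosts --sudooption='!authenticate'

The first command creates a rule that applies to all hosts and all commands, the second attaches the admins group to it, and the third makes the sudo access passwordless so that the web console never has to prompt for a sudo password.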
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/4.18_release_notes/making-open-source-more-inclusive |
Chapter 1. Preparing to install on IBM Z and IBM LinuxONE | Chapter 1. Preparing to install on IBM Z and IBM LinuxONE 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 1.2. Choosing a method to install OpenShift Container Platform on IBM Z or IBM LinuxONE You can install OpenShift Container Platform with the Assisted Installer . This method requires no setup for the installer, and is ideal for connected environments like IBM Z(R). See Installing an on-premise cluster using the Assisted Installer for additional details. You can also install OpenShift Container Platform on infrastructure that you provide. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See the Installation process for more information about Assisted Installer and user-provisioned installation processes. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the IBM Z(R) platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 1.2.1. User-provisioned infrastructure installation of OpenShift Container Platform on IBM Z User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform with z/VM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with z/VM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE : You can install OpenShift Container Platform with KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. 
Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE in a restricted network : You can install OpenShift Container Platform with RHEL KVM on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_z_and_ibm_linuxone/preparing-to-install-on-ibm-z |
Chapter 112. FTPS Component | Chapter 112. FTPS Component Available as of Camel version 2.2 This component provides access to remote file systems over the FTP and SFTP protocols. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ftp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> For more information you can look at FTP component 112.1. URI Options The options below are exclusive for the FTPS component. The FTPS component supports 2 options, which are listed below. Name Description Default Type useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The FTPS endpoint is configured using URI syntax: with the following path and query parameters: 112.1.1. Path Parameters (3 parameters): Name Description Default Type host Required Hostname of the FTP server String port Port of the FTP server int directoryName The starting directory String 112.1.2. Query Parameters (122 parameters): Name Description Default Type binary (common) Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). false boolean charset (common) This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. String disconnect (common) Whether or not to disconnect from remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead. false boolean doneFileName (common) Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders.The done file is always expected in the same folder as the original file. Only USDfile.name and USDfile.name.noext is supported as dynamic placeholders. String fileName (common) Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it take precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. 
If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-USDdate:now:yyyyMMdd.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards. String passiveMode (common) Sets passive mode connections. Default is active mode connections. false boolean separator (common) Sets the path separator to be used. UNIX = Uses unix style path separator Windows = Uses windows style path separator Auto = (is default) Use existing path separator in file name UNIX PathSeparator transferLoggingInterval Seconds (common) Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations takes longer time. 5 int transferLoggingLevel (common) Configure the logging level to use when logging the progress of upload and download operations. DEBUG LoggingLevel transferLoggingVerbose (common) Configures whether the perform verbose (fine grained) logging of the progress of upload and download operations. false boolean fastExistsCheck (common) If set this option to be true, camel-ftp will use the list file directly to check if the file exists. Since some FTP server may not support to list the file directly, if the option is false, camel-ftp will use the old way to list the directory and check if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delete (consumer) If true, the file will be deleted after it is processed successfully. false boolean moveFailed (consumer) Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. String noop (consumer) If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. false boolean preMove (consumer) Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. String preSort (consumer) When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. 
The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled. false boolean recursive (consumer) If a directory, will look for files in all the sub-directories as well. false boolean resumeDownload (consumer) Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition the options localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean streamDownload (consumer) Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. false boolean directoryMustExist (consumer) Similar to startingDirectoryMustExist but this applies during polling recursive sub directories. false boolean download (consumer) Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern handleDirectoryParser AbsoluteResult (consumer) Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in with absolute paths The reason for this is that some FTP servers may return file names with absolute paths, and if so then the FTP component needs to handle this by converting the returned path into a relative path. false boolean ignoreFileNotFoundOr PermissionError (consumer) Whether to ignore when (trying to list files in directories or when downloading a file), which does not exist or due to permission error. By default when a directory or file does not exists or insufficient permission, then an exception is thrown. Setting this option to true allows to ignore that instead. false boolean inProgressRepository (consumer) A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. IdempotentRepository localWorkDirectory (consumer) When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. String onCompletionException Handler (consumer) To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. 
The default implementation will log any exception at WARN level and ignore. ExceptionHandler pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy processStrategy (consumer) A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. GenericFileProcess Strategy receiveBufferSize (consumer) The receive (download) buffer size Used only by FTPClient 32768 int startingDirectoryMustExist (consumer) Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will thrown an exception if the directory doesn't exist. false boolean useList (consumer) Whether to allow using LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command, and therefore you can set this option to false. Notice when using this option, then the specific file to download does not include meta-data information such as file size, timestamp, permissions etc, because those information is only possible to retrieve when LIST command is in use. true boolean fileExist (producer) What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. Append - adds content to the existing file. Fail - throws a GenericFileOperationException, indicating that there is already an existing file. Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Override GenericFileExist flatten (producer) Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. false boolean jailStartingDirectory (producer) Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. 
true boolean moveExisting (producer) Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. String tempFileName (producer) The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. String tempPrefix (producer) This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. String allowNullBody (producer) Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. false boolean chmod (producer) Allows you to set chmod on the stored file. For example chmod=640. String disconnectOnBatchComplete (producer) Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server. false boolean eagerDeleteTargetFile (producer) Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. true boolean keepLastModified (producer) Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. false boolean moveExistingFileStrategy (producer) Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided FileMoveExisting Strategy sendNoop (producer) Whether to send a noop command as a pre-write check before uploading files to the FTP server. 
This is enabled by default as a validation of the connection is still valid, which allows to silently re-connect to be able to upload the file. However if this causes problems, you can turn this option off. true boolean activePortRange (advanced) Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, eg 10000-19999 to include all 1xxxx ports. String autoCreate (advanced) Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. true boolean bufferSize (advanced) Write buffer sized in bytes. 131072 int connectTimeout (advanced) Sets the connect timeout for waiting for a connection to be established Used by both FTPClient and JSCH 10000 int ftpClient (advanced) To use a custom instance of FTPClient FTPClient ftpClientConfig (advanced) To use a custom instance of FTPClientConfig to configure the FTP client the endpoint should use. FTPClientConfig ftpClientConfigParameters (advanced) Used by FtpComponent to provide additional parameters for the FTPClientConfig Map ftpClientParameters (advanced) Used by FtpComponent to provide additional parameters for the FTPClient Map maximumReconnectAttempts (advanced) Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior. int reconnectDelay (advanced) Delay in millis Camel will wait before performing a reconnect attempt. long siteCommand (advanced) Sets optional site command(s) to be executed after successful login. Multiple site commands can be separated using a new line character. String soTimeout (advanced) Sets the so timeout Used only by FTPClient 300000 int stepwise (advanced) Sets whether we should stepwise change directories while traversing file structures when downloading files, or as well when uploading a file to a directory. You can disable this if you for example are in a situation where you cannot change directory on the FTP server due security reasons. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean throwExceptionOnConnect Failed (advanced) Should an exception be thrown if connection failed (exhausted) By default exception is not thrown and a WARN is logged. You can use this to enable exception being thrown and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method. false boolean timeout (advanced) Sets the data timeout for waiting for reply Used only by FTPClient 30000 int antExclude (filter) Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. String antFilterCaseSensitive (filter) Sets case sensitive flag on ant filter true boolean antInclude (filter) Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. String eagerMaxMessagesPerPoll (filter) Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. 
Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. true boolean exclude (filter) Is used to exclude files, if filename matches the regex pattern (matching is case in-senstive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris String filter (filter) Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. GenericFileFilter filterDirectory (filter) Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as USDdate:now:yyyMMdd String filterFile (filter) Filters the file based on Simple language. For example to filter on file size, you can use USDfile:size 5000 String idempotent (filter) Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. false Boolean idempotentKey (filter) To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=USDfile:name-USDfile:size String idempotentRepository (filter) A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryMessageIdRepository if none is specified and idempotent is true. IdempotentRepository include (filter) Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris String maxDepth (filter) The maximum depth to traverse when recursively processing a directory. 2147483647 int maxMessagesPerPoll (filter) To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. int minDepth (filter) The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. int move (filter) Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. String exclusiveReadLockStrategy (lock) Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. GenericFileExclusive ReadLockStrategy readLock (lock) Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. 
This option provides the build in strategies: none - No read lock is in use markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. fileLock - is for using java.nio.channels.FileLock. This option is not avail on Windows or the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. Notice: The various read locks is not all suited to work in clustered mode, where concurrent consumers on different nodes is competing for the same files on a shared file system. The markerFile using a close to atomic operation to create the empty marker file, but its not guaranteed to work in a cluster. The fileLock may work better but then the file system need to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. none String readLockCheckInterval (lock) Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 1000 long readLockDeleteOrphanLock Files (lock) Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shutdown (such as a JVM crash). If turning this option to false then any orphaned lock file will cause Camel to not attempt to pickup that file, this could also be due another node is concurrently reading files from the same shared directory. 
true boolean readLockIdempotentRelease Async (lock) Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockIdempotentRelease AsyncPoolSize (lock) The number of threads in the scheduled thread pool when using asynchronous release tasks. Using a default of 1 core threads should be sufficient in almost all use-cases, only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in-use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option. int readLockIdempotentRelease Delay (lock) Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race-conditions. By expanding the time-window of the release tasks helps prevents these situations. Note delaying is only needed if you have configured readLockRemoveOnCommit to true. int readLockIdempotentRelease ExecutorService (lock) To use a custom and shared thread pool for asynchronous release tasks. See more details at the readLockIdempotentReleaseDelay option. ScheduledExecutor Service readLockLoggingLevel (lock) Logging level used when a read lock could not be acquired. By default a WARN is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. DEBUG LoggingLevel readLockMarkerFile (lock) Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. true boolean readLockMinAge (lock) This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at last 5 minutes old. This can speedup the changed read lock as it will only attempt to acquire files which are at least that given age. 0 long readLockMinLength (lock) This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. 1 long readLockRemoveOnCommit (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file is succeeded and a commit happens. By default the file is not removed which ensures that any race-condition do not occur so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. 
false boolean readLockRemoveOnRollback (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). true boolean readLockTimeout (lock) Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 10000 long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean shuffle (sort) To shuffle the list of files (sort in random order) false boolean sortBy (sort) Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. 
String sorter (sort) Pluggable sorter as a java.util.Comparator class. Comparator account (security) Account to use for login String disableSecureDataChannel Defaults (security) Use this option to disable default options when using a secure data channel. This gives you full control over which execPbsz and execProt settings are used. Default is false false boolean execPbsz (security) When using a secure data channel you can set the exec protection buffer size Long execProt (security) The exec protection level PROT command. C - Clear S - Safe (SSL protocol only) E - Confidential (SSL protocol only) P - Private String ftpClientKeyStore Parameters (security) Set the key store parameters Map ftpClientTrustStore Parameters (security) Set the trust store parameters Map isImplicit (security) Set the security mode (Implicit/Explicit). true - Implicit Mode / false - Explicit Mode false boolean password (security) Password to use for login String securityProtocol (security) Set the underlying security protocol. TLS String sslContextParameters (security) Gets the JSSE configuration that overrides any settings in FtpsEndpoint#ftpClientKeyStoreParameters, ftpClientTrustStoreParameters, and FtpsConfiguration#getSecurityProtocol(). SSLContextParameters username (security) Username to use for login String 112.2. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.ftps.enabled Enable ftps component true Boolean camel.component.ftps.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.ftps.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ftp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"ftps:host:port/directoryName"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ftps-component |
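As an illustration of how the path and query parameters above fit together, the following is a minimal sketch of a fully expanded FTPS consumer endpoint URI. The host, port, directory, and credentials are placeholders, and only options documented in the tables above are used:

ftps://myftpshost:2121/orders?username=admin&password=secret&securityProtocol=TLS&isImplicit=false&binary=true&passiveMode=true&delete=true&delay=60000

Such an endpoint would poll the orders directory over explicit FTPS every 60 seconds (delay=60000), transfer files in binary mode over passive connections, and delete each remote file after it has been processed.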
Preface | Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you to understand the installation requirements and processes behind installing Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/pr01 |
14.13. Displaying Per-guest Virtual Machine Information | 14.13. Displaying Per-guest Virtual Machine Information This section provides information about displaying virtual machine information for each guest. 14.13.1. Displaying the Guest Virtual Machines To display the guest virtual machine list and their current states with virsh : Other options available include: --inactive option lists the inactive guest virtual machines (that is, guest virtual machines that have been defined but are not currently active) --all option lists all guest virtual machines. For example: There are seven states that can be displayed using this command: Running - The running state refers to guest virtual machines which are currently active on a CPU. Idle - The idle state indicates that the domain is idle, and may not be running or able to run. This can occur because the domain is waiting on IO (a traditional wait state) or has gone to sleep because there was nothing else for it to do. Paused - The paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager or virsh suspend . When a guest virtual machine is paused, it consumes memory and other resources, but it is ineligible for scheduling and CPU resources from the hypervisor. Shutdown - The shutdown state is for guest virtual machines in the process of shutting down. The guest virtual machine is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest virtual machine operating systems; some operating systems do not respond to these signals. Shut off - The shut off state indicates that the domain is not running. This can occur when a domain completely shuts down or has not been started. Crashed - The crashed state indicates that the domain has crashed. This state can only occur if the guest virtual machine has been configured not to restart on crash. Dying - Domains in the dying state are in the process of dying, which is a state where the domain has not completely shut down or crashed. --managed-save Although this option alone does not filter the domains, it will list the domains that have managed save state enabled. In order to actually list the domains separately, you will need to use the --inactive option as well. --name If specified, domain names are printed in a list. If --uuid is specified, the domain's UUID is printed instead. Using the option --table specifies that a table style output should be used. All three options are mutually exclusive. --title This option must be used with --table output. --title will cause an extra column to be created in the table with the short domain description (title). --persistent includes persistent domains in a list. To list transient domains, use the --transient option. --with-managed-save lists the domains that have been configured with managed save. To list the domains without it, use the --without-managed-save option. --state-running filters for the domains that are running, --state-paused for paused domains, --state-shutoff for domains that are turned off, and --state-other lists all states as a fallback. --autostart this option will cause the auto-starting domains to be listed. To list domains with this feature disabled, use the option --no-autostart . --with-snapshot will list the domains whose snapshot images can be listed. To filter for the domains without a snapshot, use the option --without-snapshot . For an example of virsh vcpuinfo output, refer to Section 14.13.2, "Displaying Virtual CPU Information" | [
"virsh list",
"virsh list --all Id Name State ---------------------------------- 0 Domain-0 running 1 Domain202 paused 2 Domain010 inactive 3 Domain9600 crashed",
"virsh list --title --name Id Name State Title 0 Domain-0 running Mailserver1 2 rhelvm paused"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-displaying_per_guest_virtual_machine_information |
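The filtering options described above can be combined on a single command line. The following commands are an illustrative sketch only and assume that some locally defined domains exist:

virsh list --all --name
virsh list --inactive --with-managed-save
virsh list --table --title --state-running

The first command prints only the names of all domains, the second lists inactive domains that have a managed save image, and the third prints a table of running domains with an extra column for their short descriptions (titles).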
Chapter 2. Creating software for RPM packaging | Chapter 2. Creating software for RPM packaging To prepare software for RPM packaging, you must understand what source code is and how to create software. 2.1. What is source code Source code is human-readable instructions to the computer that describe how to perform a computation. Source code is expressed by using a programming language. The following versions of the Hello World program written in three different programming languages cover major RPM Package Manager use cases: Hello World written in Bash The bello project implements Hello World in Bash . The implementation contains only the bello shell script. The purpose of this program is to output Hello World on the command line. The bello file has the following contents: Hello World written in Python The pello project implements Hello World in Python . The implementation contains only the pello.py program. The purpose of the program is to output Hello World on the command line. The pello.py file has the following contents: Hello World written in C The cello project implements Hello World in C. The implementation contains only the cello.c and Makefile files. The resulting tar.gz archive therefore has two files in addition to the LICENSE file. The purpose of the program is to output Hello World on the command line. The cello.c file has the following contents: Note The packaging process is different for each version of the Hello World program. 2.2. Methods of creating software You can convert the human-readable source code into machine code by using one the following methods: Natively compile software. Interpret software by using a language interpreter or language virtual machine. You can either raw-interpret or byte-compile software. 2.2.1. Natively compiled software Natively compiled software is software written in a programming language that compiles to machine code with a resulting binary executable file. Natively compiled software is standalone software. Note Natively compiled RPM packages are architecture-specific. If you compile such software on a computer that uses a 64-bit (x86_64) AMD or Intel processor, it does not run on a 32-bit (x86) AMD or Intel processor. The resulting package has the architecture specified in its name. 2.2.2. Interpreted software Some programming languages, such as Bash or Python , do not compile to machine code. Instead, a language interpreter or a language virtual machine executes the programs' source code step-by-step without prior transformations. Note Software written entirely in interpreted programming languages is not architecture-specific. Therefore, the resulting RPM package has the noarch string in its name. You can either raw-interpret or byte-compile software written in interpreted languages: Raw-interpreted software You do not need to compile this type of software. Raw-interpreted software is directly executed by the interpreter. Byte-compiled software You must first compile this type of software into bytecode, which is then executed by the language virtual machine. Note Some byte-compiled languages can be either raw-interpreted or byte-compiled. Note that the way you build and package software by using RPM is different for these two software types. 2.3. Building software from source During the software building process, the source code is turned into software artifacts that you can package by using RPM. 2.3.1. 
Building software from natively compiled code You can build software written in a compiled language into an executable by using one of the following methods: Manual building Automated building 2.3.1.1. Manually building a sample C program You can use manual building to build software written in a compiled language. A sample Hello World program written in C ( cello.c ) has the following contents: Procedure Invoke the C compiler from the GNU Compiler Collection to compile the source code into binary: Run the resulting binary cello : 2.3.1.2. Setting up automated building for a sample C program Large-scale software commonly uses automated building. You can set up automated building by creating the Makefile file and then running the GNU make utility. Procedure Create the Makefile file with the following content in the same directory as cello.c : Note that the lines under cello: and clean: must begin with a tabulation character (tab). Build the software: Because a build is already available in the current directory, enter the make clean command, and then enter the make command again: Note that trying to build the program again at this point has no effect because the GNU make system detects the existing binary: Run the program: 2.3.2. Interpreting source code You can convert the source code written in an interpreted programming language into machine code by using one of the following methods: Byte-compiling The procedure for byte-compiling software varies depending on the following factors: Programming language Language's virtual machine Tools and processes used with that language Note You can byte-compile software written, for example, in Python . Python software intended for distribution is often byte-compiled, but not in the way described in this document. The described procedure aims not to conform to the community standards, but to be simple. For real-world Python guidelines, see Software Packaging and Distribution . You can also raw-interpret Python source code. However, the byte-compiled version is faster. Therefore, RPM packagers prefer to package the byte-compiled version for distribution to end users. Raw-interpreting Software written in shell scripting languages, such as Bash , is always executed by raw-interpreting. 2.3.2.1. Byte-compiling a sample Python program By choosing byte-compiling over raw-interpreting of Python source code, you can create faster software. A sample Hello World program written in the Python programming language ( pello.py ) has the following contents: Procedure Byte-compile the pello.py file: Verify that a byte-compiled version of the file is created: Note that the package version in the output might differ depending on which Python version is installed. Run the program in pello.py : 2.3.2.2. Raw-interpreting a sample Bash program A sample Hello World program written in Bash shell built-in language ( bello ) has the following contents: Note The shebang ( #! ) sign at the top of the bello file is not part of the programming language source code. Use the shebang to turn a text file into an executable. The system program loader parses the line containing the shebang to get a path to the binary executable, which is then used as the programming language interpreter. Procedure Make the file with source code executable: Run the created file: | [
"#!/bin/bash printf \"Hello World\\n\"",
"#!/usr/bin/python3 print(\"Hello World\")",
"#include <stdio.h> int main(void) { printf(\"Hello World\\n\"); return 0; }",
"#include <stdio.h> int main(void) { printf(\"Hello World\\n\"); return 0; }",
"gcc -g -o cello cello.c",
"./cello Hello World",
"cello: gcc -g -o cello cello.c clean: rm cello",
"make make: 'cello' is up to date.",
"make clean rm cello make gcc -g -o cello cello.c",
"make make: 'cello' is up to date.",
"./cello Hello World",
"print(\"Hello World\")",
"python -m compileall pello.py",
"ls __pycache__ pello.cpython-311.pyc",
"python pello.py Hello World",
"#!/bin/bash printf \"Hello World\\n\"",
"chmod +x bello",
"./bello Hello World"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/creating-software-for-rpm-packaging_packaging-and-distributing-software |
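Once the example programs build and run as shown above, a common next step toward RPM packaging is to place the sources in a versioned directory and create a gzip-compressed tarball that a later spec file can reference as its source archive. The commands below are a sketch only; the cello-1.0 version string is an assumption, and the LICENSE file is the one mentioned for the cello project above:

mkdir cello-1.0
cp cello.c Makefile LICENSE cello-1.0/
tar -czvf cello-1.0.tar.gz cello-1.0

The resulting cello-1.0.tar.gz archive contains the two project files plus the LICENSE file and can serve as the input for the actual packaging steps.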
14.6.3. WINS (Windows Internetworking Name Server) | 14.6.3. WINS (Windows Internetworking Name Server) Either a Samba server or a Windows NT server can function as a WINS server. When a WINS server is used with NetBIOS enabled, UDP unicasts can be routed which allows name resolution across networks. Without a WINS server, the UDP broadcast is limited to the local subnet and therefore cannot be routed to other subnets, workgroups, or domains. If WINS replication is necessary, do not use Samba as your primary WINS server, as Samba does not currently support WINS replication. In a mixed NT/2000/2003 server and Samba environment, it is recommended that you use the Microsoft WINS capabilities. In a Samba-only environment, it is recommended that you use only one Samba server for WINS. The following is an example of the smb.conf file in which the Samba server is serving as a WINS server: Note All servers (including Samba) should connect to a WINS server to resolve NetBIOS names. Without WINS, browsing only occurs on the local subnet. Furthermore, even if a domain-wide list is somehow obtained, hosts are not resolvable for the client without WINS. | [
"[global] wins support = Yes"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-WINS |
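For the other Samba servers and clients on the network that should resolve NetBIOS names through this server, a corresponding client-side smb.conf sketch might look like the following; the IP address is a placeholder for the address of the Samba WINS server shown above:

[global]
   wins support = No
   wins server = 192.168.1.20

Only one Samba server in the environment should set wins support = Yes; every other host points at it with the wins server parameter.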
Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network | Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network In OpenShift Container Platform 4.16, you can install a cluster on bare metal infrastructure that you provision in a restricted network. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 4.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. 
Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 4.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 4.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.4.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 4.4.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. 
If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.4.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.4.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 4.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.4.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.4.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 
The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 
4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 4.4.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.8. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.4.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.5. Creating a manifest object that includes a customized br-ex bridge As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a MachineConfig object that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster. Consider the following use cases for creating a manifest object that includes a customized br-ex bridge: You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge. You want to deploy the bridge on a different interface than the interface available on a host or server IP address. You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and to facilitate data forwarding between the interfaces. Note If you require an environment with a single network interface controller (NIC) and default network settings, use the configure-ovs.sh shell script. After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine Config Operator injects Ignition configuration files into each node in your cluster, so that each node receives the br-ex bridge network configuration. To prevent configuration conflicts, the configure-ovs.sh shell script receives a signal to not configure the br-ex bridge. Prerequisites Optional: You have installed the nmstate API so that you can validate the NMState configuration. Procedure Create an NMState configuration file that defines your customized br-ex bridge network; you base64-encode this file in a later step: Example of an NMState configuration for a customized br-ex bridge network interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false # ... 1 Name of the interface. 2 The interface type; ethernet in this example. 3 The requested state for the interface after creation. 4 Disables IPv4 and IPv6 in this example. 5 The node NIC to which the bridge attaches.
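Before you encode the configuration, it can help to confirm the name of the NIC that the bridge attaches to, because enp2s0 in the example is only a placeholder. The following is a minimal sketch that assumes the nmstate CLI ( nmstatectl ) is installed on the host that you are inspecting:
$ nmstatectl show enp2s0
Replace enp2s0 with the interface name that is reported for your hardware.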
Use the cat command to base64-encode the contents of the NMState configuration: USD cat <nmstate_configuration>.yaml | base64 1 1 Replace <nmstate_configuration> with the name of your NMState resource YAML file. Create a MachineConfig manifest file and define a customized br-ex bridge network configuration analogous to the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml # ... 1 For each node in your cluster, specify the hostname path to your node and the base-64 encoded Ignition configuration file data for the machine type. If you have a single global configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that you want to apply to all nodes in your cluster, you do not need to specify the hostname path for each node. The worker role is the default role for nodes in your cluster. The .yaml extension does not work when specifying the hostname path for each node or all nodes in the MachineConfig manifest file. 2 The name of the policy. 3 Writes the encoded base64 information to the specified path. 4.5.1. Optional: Scaling each machine set to compute nodes To apply a customized br-ex bridge configuration to all compute nodes in your OpenShift Container Platform cluster, you must edit your MachineConfig custom resource (CR) and modify its roles. Additionally, you must create a BareMetalHost CR that defines information for your bare-metal machine, such as hostname, credentials, and so on. After you configure these resources, you must scale machine sets, so that the machine sets can apply the resource configuration to each compute node and reboot the nodes. Prerequisites You created a MachineConfig manifest object that includes a customized br-ex bridge configuration. Procedure Edit the MachineConfig CR by entering the following command: USD oc edit mc <machineconfig_custom_resource_name> Add each compute node configuration to the CR, so that the CR can manage roles for each defined compute node in your cluster. Create a Secret object named extraworker-secret that has a minimal static IP configuration. Apply the extraworker-secret secret to each node in your cluster by entering the following command. This step provides each compute node access to the Ignition config file. USD oc apply -f ./extraworker-secret.yaml Create a BareMetalHost resource and specify the network secret in the preprovisioningNetworkDataName parameter: Example BareMetalHost resource with an attached network secret apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: # ... preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret # ... To manage the BareMetalHost object within the openshift-machine-api namespace of your cluster, change to the namespace by entering the following command: USD oc project openshift-machine-api Get the machine sets: USD oc get machinesets Scale each machine set by entering the following command. You must run this command for each machine set. USD oc scale machineset <machineset_name> --replicas=<n> 1 1 Where <machineset_name> is the name of the machine set and <n> is the number of compute nodes. 4.6. 
Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. One example approach that uses firewalld is sketched after the DNS setup step later in this procedure. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements.
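The following firewalld commands are one hedged sketch for the firewall step in this procedure. The use of firewalld is an assumption, and the port list is abbreviated; open the full set of ports from the tables in the Networking requirements for user-provisioned infrastructure section on the hosts that need them:
$ sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp
$ sudo firewall-cmd --permanent --add-port=30000-32767/tcp --add-port=4789/udp --add-port=6081/udp
$ sudo firewall-cmd --reload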
Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 4.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. 
For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 4.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 4.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml .
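As a minimal sketch of the previous two steps, assuming that you keep a customized copy of the template under the hypothetical name my-install-config.yaml outside the installation directory:
$ mkdir <installation_directory>
$ cp my-install-config.yaml <installation_directory>/install-config.yaml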
Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Important The ImageContentSourcePolicy file is generated as an output of oc mirror after the mirroring process is finished. The oc mirror command generates an ImageContentSourcePolicy file which contains the YAML needed to define ImageContentSourcePolicy . Copy the text from this file and paste it into your install-config.yaml file. You must run the 'oc mirror' command twice. The first time you run the oc mirror command, you get a full ImageContentSourcePolicy file. The second time you run the oc mirror command, you only get the difference between the first and second run. Because of this behavior, you must always keep a backup of these files in case you need to merge them into one complete ImageContentSourcePolicy file. Keeping a backup of these two output files ensures that you have a complete ImageContentSourcePolicy file. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 4.9.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. 
If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 4.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. 
In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
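As a quick, hedged check of the previous step, you can print the parameter straight from the manifest; the exact surrounding YAML can differ between releases:
$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
The output shows false for a standard installation, or true if you are configuring a three-node cluster.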
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 4.11. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 4.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. 
If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 4.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. 
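If you want the digests for all three node types in one pass, sha512sum accepts multiple files. This is a sketch only and assumes the default file names that the create ignition-configs command produces:
$ sha512sum <installation_directory>/bootstrap.ign <installation_directory>/master.ign <installation_directory>/worker.ign
Record each digest; the coreos-installer step later in this section passes it through the --ignition-hash=sha512-<digest> option.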
Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 
2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.12.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. 
Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. 
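Before configuring the boot menu entries, you can optionally confirm from the installation host that the uploaded artifacts are reachable over HTTP. The following checks are a sketch and assume that the files keep their original names and are served from the root of <HTTP_server>:
$ curl -I http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
$ curl -I http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
$ curl -I http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img
Each request should return an HTTP 200 status; a 404 response usually indicates a wrong path or file name in the URL.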
Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 
2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.12.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 4.12.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. 
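As an example of the machine config option mentioned above, the following Butane snippet is a sketch that copies a static-IP NetworkManager keyfile to worker nodes. The interface name ens3, the addresses, and the connection file name are assumptions that you would replace with values for your environment:
variant: openshift
version: 4.16.0
metadata:
  name: 99-worker-static-network
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/NetworkManager/system-connections/ens3-static.nmconnection
      mode: 0600
      overwrite: true
      contents:
        inline: |
          [connection]
          id=ens3-static
          type=ethernet
          interface-name=ens3
          [ipv4]
          method=manual
          address1=10.10.10.2/24,10.10.10.254
          dns=4.4.4.41
          [ipv6]
          method=disabled
Convert the snippet with butane and add the resulting MachineConfig object file to the <installation_directory>/openshift directory before you create the cluster, as described in "Creating machine configs with Butane".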
To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 4.12.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 4.12.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. 
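If you are unsure whether container storage already lives on its own filesystem on an existing node, one quick check, shown here as a sketch that assumes cluster access with oc and a node named <node_name>, is to compare the filesystems that back the kubelet and container paths:
$ oc debug node/<node_name> -- chroot /host df -h /var/lib/kubelet /var/lib/containers
If both paths report the same filesystem, nodefs and imagefs share the root filesystem, which is the default layout described above.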
Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 4.12.3.2.2. 
Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 4.12.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 
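For example, a PXE APPEND line that passes a live install Ignition config might look like the following sketch, where live-tasks.ign is a hypothetical live Ignition config served from <HTTP_server> and the initrd and rootfs entries are the same artifacts used for a normal PXE boot:
APPEND initrd=rhcos-<version>-live-initramfs.x86_64.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.x86_64.img ignition.firstboot ignition.platform.id=metal ignition.config.url=http://<HTTP_server>/live-tasks.ign
Because no coreos.inst.* arguments are present, the live environment boots and applies the live Ignition config without starting an automated installation.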
4.12.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.16 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration, or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 4.12.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 4.12.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system.
For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 4.12.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 4.12.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. 
If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 4.12.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.12.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. 
For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 4.12.3.7.4. Customizing a live install ISO image for an iSCSI boot device You can set the iSCSI target and initiator values for automatic mounting, booting and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with the following information: USD coreos-installer iso customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5 --dest-karg-append netroot=<target_iqn> \ 6 -o custom.iso rhcos-<version>-live.x86_64.iso 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The location of the destination system. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 4 The Ignition configuration for the destination system. 5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 6 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 4.12.3.7.5. Customizing a live install ISO image for an iSCSI boot device with iBFT You can set the iSCSI target and initiator values for automatic mounting, booting and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Optional: You have multipathed your iSCSI target. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with the following information: USD coreos-installer iso customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/mapper/mpatha \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.firmware=1 \ 5 --dest-karg-append rd.multipath=default \ 6 -o custom.iso rhcos-<version>-live.x86_64.iso 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all .
3 The path to the device. If you are using multipath, the multipath device, /dev/mapper/mpatha , If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path . 4 The Ignition configuration for the destination system. 5 The iSCSI parameter is read from the BIOS firmware. 6 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 4.12.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 4.12.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. 
If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 4.12.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.12.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. 
For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 4.12.3.8.4. Customizing a live install PXE environment for an iSCSI boot device You can set the iSCSI target and initiator values for automatic mounting, booting and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file with the following information: USD coreos-installer pxe customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5 --dest-karg-append netroot=<target_iqn> \ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The location of the destination system. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 4 The Ignition configuration for the destination system. 5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 6 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 4.12.3.8.5. Customizing a live install PXE environment for an iSCSI boot device with iBFT You can set the iSCSI target and initiator values for automatic mounting, booting and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Optional: You have multipathed your iSCSI target. Procedure Download the coreos-installer binary from the coreos-installer image mirror page.
Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file with the following information: USD coreos-installer pxe customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/mapper/mpatha \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.firmware=1 \ 5 --dest-karg-append rd.multipath=default \ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The path to the device. If you are using multipath, the multipath device, /dev/mapper/mpatha , If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path . 4 The Ignition configuration for the destination system. 5 The iSCSI parameter is read from the BIOS firmware. 6 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 4.12.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.12.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. 
For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. 
When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 4.12.3.9.2. 
coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 4.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified destination device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i , --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation.
--installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o , --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 4.12.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 4.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist.
coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 4.12.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . 
Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 4.12.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. 
Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-containers.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 4.12.5. Installing RHCOS manually on an iSCSI boot device You can manually install RHCOS on an iSCSI target. Prerequisites You are in the RHCOS live environment. You have an iSCSI target that you want to install RHCOS on. Procedure Mount the iSCSI target from the live environment by running the following command: USD iscsiadm \ --mode discovery \ --type sendtargets --portal <IP_address> \ 1 --login 1 The IP address of the target portal. Install RHCOS onto the iSCSI target by running the following command and using the necessary kernel arguments, for example: USD coreos-installer install \ /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \ 2 --append-karg netroot=<target_iqn> \ 3 --console ttyS0,115200n8 --ignition-file <path_to_file> 1 The location you are installing to. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 2 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 3 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . Unmount the iSCSI disk with the following command: USD iscsiadm --mode node --logoutall=all This procedure can also be performed using the coreos-installer iso customize or coreos-installer pxe customize subcommands. 4.12.6.
Installing RHCOS on an iSCSI boot device using iBFT On a completely diskless machine, the iSCSI target and initiator values can be passed through iBFT. iSCSI multipathing is also supported. Prerequisites You are in the RHCOS live environment. You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Mount the iSCSI target from the live environment by running the following command: USD iscsiadm \ --mode discovery \ --type sendtargets --portal <IP_address> \ 1 --login 1 The IP address of the target portal. Optional: enable multipathing and start the daemon with the following command: USD mpathconf --enable && systemctl start multipathd.service Install RHCOS onto the iSCSI target by running the following command and using the necessary kernel arguments, for example: USD coreos-installer install \ /dev/mapper/mpatha \ 1 --append-karg rd.iscsi.firmware=1 \ 2 --append-karg rd.multipath=default \ 3 --console ttyS0 \ --ignition-file <path_to_file> 1 The path of a single multipathed device. If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path . 2 The iSCSI parameter is read from the BIOS firmware. 3 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . Unmount the iSCSI disk: USD iscsiadm --mode node --logout=all This procedure can also be performed using the coreos-installer iso customize or coreos-installer pxe customize subcommands. 4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 
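For example, if your load balancer is an HAProxy instance similar to the example configuration shown in this document, removing the bootstrap machine typically means deleting or commenting out the bootstrap server lines in the API and machine config server sections and then reloading the service. The file path, backend names, and host names in the following excerpt are illustrative and must match your own configuration:

# /etc/haproxy/haproxy.cfg (excerpt)
listen api-server-6443
    bind *:6443
    mode tcp
    # server bootstrap bootstrap.ocp4.example.com:6443 check   # remove or comment out
    server master0 master0.ocp4.example.com:6443 check
listen machine-config-server-22623
    bind *:22623
    mode tcp
    # server bootstrap bootstrap.ocp4.example.com:22623 check  # remove or comment out
    server master0 master0.ocp4.example.com:22623 check

Reload HAProxy to apply the change:

USD sudo systemctl reload haproxy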
Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. 
Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 4.16.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.16.2.1. 
Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change the managementState field in the Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 4.16.2.2. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry.operator.openshift.io Then, change the line managementState: Removed to managementState: Managed 4.16.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.16.2.4. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters.
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 4.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.19. steps Validating an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>",
"iscsiadm --mode node --logoutall=all",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>",
"iscsiadm --mode node --logout=all",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_bare_metal/installing-restricted-networks-bare-metal |
4.4. Diagnosing and Correcting Problems in a Cluster | 4.4. Diagnosing and Correcting Problems in a Cluster For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-admin-problems-conga-CA |
Chapter 34. Displaying the priority for a process | Chapter 34. Displaying the priority for a process You can display information about the priority of a process and information about the scheduling policy for a process using the sched_getattr attribute. Prerequisites You have administrator privileges. 34.1. The chrt utility The chrt utility checks and adjusts scheduler policies and priorities. It can start new processes with the desired properties or change the properties of a running process. Additional resources chrt(1) man page on your system 34.2. Displaying the process priority using the chrt utility You can display the current scheduling policy and scheduling priority for a specified process. Procedure Run the chrt utility with the -p option, specifying a running process. 34.3. Displaying the process priority using sched_getscheduler() Real-time processes use a set of functions to control policy and priority. You can use the sched_getscheduler() function to display the scheduler policy for a specified process. Procedure Create the get_sched.c source file and open it in a text editor. Add the following lines into the file. The policy variable holds the scheduler policy for the specified process. Compile the program. Run the program with varying policies. Additional resources sched_getscheduler(2) man page on your system 34.4. Displaying the valid range for a scheduler policy You can use the sched_get_priority_min() and sched_get_priority_max() functions to check the valid priority range for a given scheduler policy. Procedure Create the sched_get.c source file and open it in a text editor. Enter the following into the file: Note If the specified scheduler policy is not known by the system, the function returns -1 and errno is set to EINVAL . Note Both SCHED_FIFO and SCHED_RR can be any number within the range of 1 to 99 . POSIX is not guaranteed to honor this range, however, and portable programs should use these functions. Save the file and exit the editor. Compile the program. The sched_get program is now ready and can be run from the directory in which it is saved. Additional resources sched_get_priority_min(2) and sched_get_priority_max(2) man pages on your system 34.5. Displaying the timeslice for a process The SCHED_RR (round-robin) policy differs slightly from the SCHED_FIFO (first-in, first-out) policy. SCHED_RR allocates concurrent processes that have the same priority in a round-robin rotation. In this way, each process is assigned a timeslice. The sched_rr_get_interval() function reports the timeslice allocated to each process. Note Though POSIX requires that this function must work only with processes that are configured to run with the SCHED_RR scheduler policy, the sched_rr_get_interval() function can retrieve the timeslice length of any process on Linux. Timeslice information is returned as a timespec . This is the number of seconds and nanoseconds since the base time of 00:00:00 GMT, 1 January 1970: Procedure Create the sched_timeslice.c source file and open it in a text editor. Add the following lines to the sched_timeslice.c file. Save the file and exit the editor. Compile the program. Run the program with varying policies and priorities. Additional resources nice(2) , getpriority(2) , and setpriority(2) man pages on your system 34.6. Displaying the scheduling policy and associated attributes for a process The sched_getattr() function queries the scheduling policy currently applied to the specified process, identified by PID. 
If PID equals to zero, the policy of the calling process is retrieved. The size argument should reflect the size of the sched_attr structure as known to userspace. The kernel fills out sched_attr::size to the size of its sched_attr structure. If the input structure is smaller, the kernel returns values outside the provided space. As a result, the system call fails with an E2BIG error. The other sched_attr fields are filled out as described in The sched_attr structure . Procedure Create the sched_timeslice.c source file and open it in a text editor. Add the following lines to the sched_timeslice.c file. Compile the sched_timeslice.c file. Check the output of the sched_timeslice program. 34.7. The sched_attr structure The sched_attr structure contains or defines a scheduling policy and its associated attributes for a specified thread. The sched_attr structure has the following form: sched_attr data structure size The thread size in bytes. If the size of the structure is smaller than the kernel structure, additional fields are then assumed to be 0 . If the size is larger than the kernel structure, the kernel verifies all additional fields as 0 . Note The sched_setattr() function fails with E2BIG error when sched_attr structure is larger than the kernel structure and updates size to contain the size of the kernel structure. sched_policy The scheduling policy sched_flags Helps control scheduling behavior when a process forks using the fork() function. The calling process is referred to as the parent process, and the new process is referred to as the child process. Valid values: 0 : The child process inherits the scheduling policy from the parent process. SCHED_FLAG_RESET_ON_FORK: fork() : The child process does not inherit the scheduling policy from the parent process. Instead, it is set to the default scheduling policy (struct sched_attr){ .sched_policy = SCHED_OTHER, } . sched_nice Specifies the nice value to be set when using SCHED_OTHER or SCHED_BATCH scheduling policies. The nice value is a number in a range from -20 (high priority) to +19 (low priority). sched_priority Specifies the static priority to be set when scheduling SCHED_FIFO or SCHED_RR . For other policies, specify priority as 0 . SCHED_DEADLINE fields must be specified only for deadline scheduling: sched_runtime : Specifies the runtime parameter for deadline scheduling. The value is expressed in nanoseconds. sched_deadline : Specifies the deadline parameter for deadline scheduling. The value is expressed in nanoseconds. sched_period : Specifies the period parameter for deadline scheduling. The value is expressed in nanoseconds. | [
"chrt -p 468 pid 468's current scheduling policy: SCHED_FIFO pid 468's current scheduling priority: 85 chrt -p 476 pid 476's current scheduling policy: SCHED_OTHER pid 476's current scheduling priority: 0",
"{EDITOR} get_sched.c",
"#include <sched.h> #include <unistd.h> #include <stdio.h> int main() { int policy; pid_t pid = getpid(); policy = sched_getscheduler(pid); printf(\"Policy for pid %ld is %i.\\n\", (long) pid, policy); return 0; }",
"gcc get_sched.c -o get_sched",
"chrt -o 0 ./get_sched Policy for pid 27240 is 0. chrt -r 10 ./get_sched Policy for pid 27243 is 2. chrt -f 10 ./get_sched Policy for pid 27245 is 1.",
"{EDITOR} sched_get.c",
"#include <stdio.h> #include <unistd.h> #include <sched.h> int main() { printf(\"Valid priority range for SCHED_OTHER: %d - %d\\n\", sched_get_priority_min(SCHED_OTHER), sched_get_priority_max(SCHED_OTHER)); printf(\"Valid priority range for SCHED_FIFO: %d - %d\\n\", sched_get_priority_min(SCHED_FIFO), sched_get_priority_max(SCHED_FIFO)); printf(\"Valid priority range for SCHED_RR: %d - %d\\n\", sched_get_priority_min(SCHED_RR), sched_get_priority_max(SCHED_RR)); return 0; }",
"gcc sched_get.c -o msched_get",
"struct timespec { time_t tv_sec; /* seconds */ long tv_nsec; /* nanoseconds */ }",
"{EDITOR} sched_timeslice.c",
"#include <stdio.h> #include <sched.h> int main() { struct timespec ts; int ret; /* real apps must check return values */ ret = sched_rr_get_interval(0, &ts); printf(\"Timeslice: %lu.%lu\\n\", ts.tv_sec, ts.tv_nsec); return 0; }",
"gcc sched_timeslice.c -o sched_timeslice",
"chrt -o 0 ./sched_timeslice Timeslice: 0.38994072 chrt -r 10 ./sched_timeslice Timeslice: 0.99984800 chrt -f 10 ./sched_timeslice Timeslice: 0.0",
"{EDITOR} sched_timeslice.c",
"#define _GNU_SOURCE #include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <linux/unistd.h> #include <linux/kernel.h> #include <linux/types.h> #include <sys/syscall.h> #include <pthread.h> #define gettid() syscall(__NR_gettid) #define SCHED_DEADLINE 6 /* XXX use the proper syscall numbers */ #ifdef __x86_64__ #define __NR_sched_setattr 314 #define __NR_sched_getattr 315 #endif struct sched_attr { __u32 size; __u32 sched_policy; __u64 sched_flags; /* SCHED_NORMAL, SCHED_BATCH */ __s32 sched_nice; /* SCHED_FIFO, SCHED_RR */ __u32 sched_priority; /* SCHED_DEADLINE (nsec) */ __u64 sched_runtime; __u64 sched_deadline; __u64 sched_period; }; int sched_getattr(pid_t pid, struct sched_attr *attr, unsigned int size, unsigned int flags) { return syscall(__NR_sched_getattr, pid, attr, size, flags); } int main (int argc, char **argv) { struct sched_attr attr; unsigned int flags = 0; int ret; ret = sched_getattr(0, &attr, sizeof(attr), flags); if (ret < 0) { perror(\"sched_getattr\"); exit(-1); } printf(\"main thread pid=%ld\\n\", gettid()); printf(\"main thread policy=%ld\\n\", attr.sched_policy); printf(\"main thread nice=%ld\\n\", attr.sched_nice); printf(\"main thread priority=%ld\\n\", attr.sched_priority); printf(\"main thread runtime=%ld\\n\", attr.sched_runtime); printf(\"main thread deadline=%ld\\n\", attr.sched_deadline); printf(\"main thread period=%ld\\n\", attr.sched_period); return 0; }",
"gcc sched_timeslice.c -o sched_timeslice",
"./sched_timeslice main thread pid=321716 main thread policy=6 main thread nice=0 main thread priority=0 main thread runtime=1000000 main thread deadline=9000000 main thread period=10000000",
"struct sched_attr { u32 size; u32 sched_policy; u64 sched_flags; s32 sched_nice; u32 sched_priority; /* SCHED_DEADLINE fields */ u64 sched_runtime; u64 sched_deadline; u64 sched_period; };"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_displaying-the-priority-for-a-process_optimizing-RHEL9-for-real-time-for-low-latency-operation |
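A minimal sketch that complements the SCHED_DEADLINE fields described above: the runtime, deadline, and period values reported by the sched_getattr example map onto the --sched-runtime, --sched-deadline, and --sched-period options of chrt. This assumes a util-linux chrt build with SCHED_DEADLINE support and root privileges; the values mirror the example output and are illustrative only.

# Start the compiled example under SCHED_DEADLINE with a 1 ms runtime,
# 9 ms deadline, and 10 ms period (all values in nanoseconds); the priority
# argument for deadline scheduling must be 0:
chrt -d --sched-runtime 1000000 --sched-deadline 9000000 --sched-period 10000000 0 ./sched_timeslice

# Confirm the policy and priority that the kernel recorded for a running process:
chrt -p <pid>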
10.2. Design Example: A Multinational Enterprise and Its Extranet | 10.2. Design Example: A Multinational Enterprise and Its Extranet This example builds a directory infrastructure for Example Corp. International. The Example Corp. from the example has grown into a large, multinational company. This example builds on the directory structure created in the last example for Example Corp., expanding the directory design to meet its new needs. Example Corp. has grown into an organization dispersed over three main geographic locations: the US, Europe, and Asia. Example Corp. now has more than 20,000 employees, all of whom live and work in the countries where the Example Corp. offices are located. Example Corp. decides to launch a company-wide LDAP directory to improve internal communication, to make it easier to develop and deploy web applications, and to increase security and privacy. Designing a directory tree for an international corporation involves determining how to collect directory entries logically, how to support data management, and how to support replication on a global scale. In addition, Example Corp. wants to create an extranet for use by its parts suppliers and trading partners. An extranet is an extension of an enterprise's intranet to external clients. The following sections describe the steps in the process of deploying a multinational directory service and extranet for Example Corp. International. 10.2.1. Multinational Enterprise Data Design Example Corp. International creates a deployment team to perform a site survey. The deployment team determines the following from the site survey: A messaging server is used to provide email routing, delivery, and reading services for most of Example Corp.'s sites. An enterprise server provides document publishing services. All servers run on Red Hat Enterprise Linux 7. Example Corp. needs to allow data to be managed locally. For example, the European site will be responsible for managing the Europe branch of the directory. This also means that Europe will be responsible for the main copy of its data. Because of the geographic distribution of Example Corp.'s offices, the directory needs to be available to users and applications 24 hours a day. Many of the data elements need to accommodate data values of several different languages. Note All data use the UTF-8 characterset; any other characterset violates LDAP standards. The deployment team also determines the following about the data design of the extranet: Parts suppliers need to log in to Example Corp.'s directory to manage their contracts with Example Corp. Parts suppliers depend on data elements used for authentication, such as name and user password. Example Corp.'s partners will use the directory to look up contact details of people in the partner network, such as email addresses and phone numbers. 10.2.2. Multinational Enterprise Schema Design Example Corp. builds upon its original schema design by adding schema elements to support the extranet. Example Corp. adds two new objects, the exampleSupplier object class and the examplePartner object class. The exampleSupplier object class allows one attribute, the exampleSupplierID attribute. This attribute contains the unique ID assigned by Example Corp. International to each automobile parts supplier with which it works. The examplePartner object class allows one attribute, the examplePartnerID attribute. This attribute contains the unique ID assigned by Example Corp. International to each trade partner. 
For information about customizing the default directory schema, see Section 3.4, "Customizing the Schema" . 10.2.3. Multinational Enterprise Directory Tree Design Based on the expanded requirements, Example Corp. creates the following directory tree: The root of the directory tree is the dc=com suffix. Under this suffix, Example Corp. creates two branches. One branch, dc=exampleCorp,dc=com , contains data internal to Example Corp. International. The other branch, dc=exampleNet,dc=com , contains data for the extranet. The directory tree for the intranet (under dc=exampleCorp,dc=com) has three main branches, each corresponding to one of the regions where Example Corp. has offices. These branches are identified using the l (locality) attribute. Each main branch under dc=exampleCorp,dc=com mimics the original directory tree design of Example Corp. Under each locality, Example Corp. creates an ou=people , an ou=groups , an ou=roles , and an ou=resources branch. See Figure 10.1, "Directory Tree for Example Corp." for more information about this directory tree design. Under the dc=exampleNet,dc=com branch, Example Corp. creates three branches. One branch for suppliers ( o=suppliers ), one branch for partners ( o=partners ), and one branch for groups ( ou=groups ). The ou=groups branch of the extranet contains entries for the administrators of the extranet as well as for mailing lists that partners subscribe to for up-to-date information on automobile parts manufacturing. The following diagram illustrates the basic directory tree resulting from the design steps listed above: Figure 10.6. Basic Directory Tree for Example Corp. International The following diagram illustrates the directory tree for the Example Corp. intranet: Figure 10.7. Directory Tree for Example Corp. International's Intranet The entry for the l=Asia entry appears in LDIF as follows: The following diagram illustrates the directory tree for Example Corp.'s extranet: Figure 10.8. Directory Tree for Example Corp. International's Extranet 10.2.4. Multinational Enterprise Topology Design At this point, Example Corp. designs its database and server topologies. The following sections describe each topology in more detail. 10.2.4.1. Database Topology The following diagram illustrates the database topology of one of Example Corp.'s main localities, Europe: Figure 10.9. Database Topology for Example Corp. Europe The database links point to databases stored locally in each country. For example, operation requests received by the Example Corp. Europe server for the data under the l=US branch are chained by a database link to a database on a server in Austin, Texas. For more information about database links and chaining, see Section 6.3.2, "Using Chaining" . The main copy of the data for dc=exampleCorp,dc=com and the root entry, dc=com , is stored in the l=Europe database. The data center in Europe contains the main copies of the data for the extranet. The extranet data is stored in three databases, one for each of the main branches. The main copy of the data for o=suppliers is stored in database one (DB1), that for o=partners is stored in database two (DB2), and that for ou=groups is stored in database three (DB3). The database topology for the extranet is illustrated below: Figure 10.10. Database Topology for Example Corp. International's Extranet 10.2.4.2. Server Topology Example Corp. develops two server topologies, one for the corporate intranet and one for the partner extranet. For the intranet, Example Corp. 
decides to have a supplier database for each major locality. This means it has three data centers, each containing two supplier servers, two hub servers, and three consumer servers. The following diagram illustrates the architecture of Example Corp. Europe's data center: Figure 10.11. Server Topology for Example Corp. Europe The data supplier for Example Corp.'s extranet is in Europe. This data is replicated to two consumer servers in the US data center and two consumer servers in the Asia data center. Overall, Example Corp. requires ten servers to support the extranet. The following diagram illustrates the server architecture of Example Corp.'s extranet in the European data center: Figure 10.12. Server Topology for Example Corp. International's Extranet The hub servers replicate data to two consumer servers in each of the data centers in Europe, the US and Asia. 10.2.5. Multinational Enterprise Replication Design Example Corp. considers the following points when designing replication for its directory: Data will be managed locally. The quality of network connections varies from site to site. Database links will be used to connect data on remote servers. Hub servers that contain read-only copies of the data will be used to replicate data to consumer servers. The hub servers are located near important directory-enabled applications such as a mail server or a web server. Hub servers remove the burden of replication from the supplier servers, so the supplier servers can focus on write operations. In the future, as Example Corp. expands and needs to add more consumer servers, the additional consumers do not affect the performance of the supplier servers. For more information on hub servers, see Section 7.2.3, "Cascading Replication" . 10.2.5.1. Supplier Architecture For the Example Corp. intranet, each locality stores the main copy of its data and uses database links to chain to the data of other localities. For the main copy of its data, each locality uses a multi-supplier replication architecture. The following diagram illustrates the supplier architecture for Europe, which includes the dc=exampleCorp,dc=com and dc=com information: Figure 10.13. Supplier Architecture for Example Corp. Europe Each locality contains two suppliers, which share main copies of the data for that site. Each locality is therefore responsible for the main copy of its own data. Using a multi-supplier architecture ensures the availability of the data and helps balance the workload managed by each supplier server. To reduce the risk of total failure, Example Corp. uses multiple read-write supplier Directory Servers at each site. The following diagram illustrates the interaction between the two supplier servers in Europe and the two supplier servers in the US: Figure 10.14. Multi-Supplier Replication Design for Example Corp. Europe and Example Corp. US The same relationship exists between Example Corp. US and Example Corp. Asia, and between Example Corp. Europe and Example Corp. Asia. 10.2.6. Multinational Enterprise Security Design Example Corp. International builds upon its security design, adding the following access controls to support its new multinational intranet: Example Corp. adds general ACIs to the root of the intranet, creating more restrictive ACIs in each country and the branches beneath each country. Example Corp. decides to use macro ACIs to minimize the number of ACIs in the directory. Example Corp. uses a macro to represent a DN in the target or bind rule portion of the ACI. 
When the directory gets an incoming LDAP operation, the ACI macros are matched against the resource targeted by the LDAP operation. If there is a match, the macro is replaced by the value of the DN of the targeted resource. For more information about macro ACIs, see the Red Hat Directory Server Administrator's Guide . Example Corp. adds the following access controls to support its extranet: Example Corp. decides to use certificate-based authentication for all extranet activities. When people log in to the extranet, they need a digital certificate. The directory is used to store the certificates. Because the directory stores the certificates, users can send encrypted email by looking up public keys stored in the directory. Example Corp. creates an ACI that forbids anonymous access to the extranet. This protects the extranet from denial of service attacks. Example Corp. wants updates to the directory data to come only from an Example Corp. hosted application. This means that partners and suppliers using the extranet can only use the tools provided by Example Corp. Restricting extranet users to Example Corp.'s preferred tools allows Example Corp. administrators to use the audit logs to track the use of the directory and limits the types of problems that can be introduced by extranet users outside of Example Corp. International. | [
"dn: l=Asia,dc=exampleCorp,dc=com objectclass: top objectclass: locality l: Asia description: includes all sites in Asia"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/design_example_a_multinational_enterprise_and_its_extranet |
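A hypothetical sketch of how a parts-supplier entry using the custom exampleSupplier object class from the schema design above might be added under the extranet branch. The DN layout, attribute values, server hostname, and the assumption that exampleSupplier is defined as an auxiliary object class are illustrative and not part of the Red Hat example.

# Create the entry in LDIF form (values are placeholders):
cat > supplier.ldif << EOF
dn: uid=supplier1,o=suppliers,dc=exampleNet,dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: exampleSupplier
cn: Parts Supplier Contact
sn: Supplier
uid: supplier1
exampleSupplierID: S-0001
EOF

# Add it to the directory, binding as the directory administrator:
ldapadd -H ldap://ldap.example.com -D "cn=Directory Manager" -W -f supplier.ldif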
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ JMS Pool in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ JMS Pool, you must install Apache Maven . To use AMQ JMS Pool, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> <version>1.1.1.redhat-00003</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ JMS Pool can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Clients 2.8.0 JMS Pool Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. $ unzip amq-clients-2.8.0-jms-pool-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . 2.4. Installing the examples Procedure Use the git clone command to clone the source repository to a local directory named pooled-jms : $ git clone https://github.com/messaginghub/pooled-jms.git pooled-jms Change to the pooled-jms directory and use the git checkout command to switch to the 1.1.1 branch: $ cd pooled-jms $ git checkout 1.1.1 The resulting local directory is referred to as <source-dir> throughout this document. | [
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> <version>1.1.1.redhat-00003</version> </dependency>",
"unzip amq-clients-2.8.0-jms-pool-maven-repository.zip",
"git clone https://github.com/messaginghub/pooled-jms.git pooled-jms",
"cd pooled-jms git checkout 1.1.1"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_pool_library/installation |
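A quick, optional check that the repository configuration above resolves the artifact, using the standard maven-dependency-plugin; the remoteRepositories flag is only needed if the Red Hat repository is declared in a POM rather than in settings.xml. This is a sketch, not part of the original chapter.

mvn dependency:get -Dartifact=org.messaginghub:pooled-jms:1.1.1.redhat-00003 \
    -DremoteRepositories=https://maven.repository.redhat.com/ga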
Chapter 57. Configuring an embedded decision engine in Oracle WebLogic Server | Chapter 57. Configuring an embedded decision engine in Oracle WebLogic Server A decision engine is a light-weight rule engine that enables you to execute your decisions and business processes. A decision engine can be part of a Red Hat Decision Manager application or it can be deployed as a service through OpenShift, Kubernetes, and Docker. You can embed a decision engine in a Red Hat Decision Manager application through the API or as a set of contexts and dependency injection (CDI) services. If you intend to use an embedded engine with your Red Hat Process Automation Manager application, you must add Maven dependencies to your project by adding the Red Hat Business Automation bill of materials (BOM) files to the project's pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Procedure Declare the Red Hat Business Automation BOM in the pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies> Declare dependencies required for your project in the <dependencies> tag. After you import the product BOM into your project, the versions of the user-facing product dependencies are defined so you do not need to specify the <version> sub-element of these <dependency> elements. However, you must use the <dependency> element to declare dependencies which you want to use in your project. For a basic Red Hat Decision Manager project, declare the following dependencies, depending on the features that you want to use: Embedded decision engine dependencies <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. 
--> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency> To use KIE Server, declare the following dependencies: Client application KIE Server dependencies <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency> To create a remote client for Red Hat Process Automation Manager, declare the following dependency: Client dependency <dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency> When creating a JAR file that includes assets, such as rules and process definitions, specify the packaging type for your Maven project as kjar and use org.kie:kie-maven-plugin to process the kjar packaging type located under the <project> element. In the following example, ${kie.version} is the Maven library version listed in What is the mapping between Red Hat Process Automation Manager and the Maven library version? : <packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>${kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> If you use a decision engine with persistence support in your project, you must declare the following hibernate dependencies in the dependencyManagement section of your pom.xml file by copying the version.org.hibernate-4ee7 property from the Red Hat Business Automation BOM file: Hibernate dependencies in decision engine with persistence <!-- hibernate dependencies --> <dependencyManagement> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>${version.org.hibernate-4ee7}</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>${version.org.hibernate-4ee7}</version> </dependency> </dependencies> </dependencyManagement> | [
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies>",
"<dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency>",
"<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency>",
"<dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency>",
"<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>",
"<!-- hibernate dependencies --> <dependencyManagement> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>USD{version.org.hibernate-4ee7}</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>USD{version.org.hibernate-4ee7}</version> </dependency> </dependencies> </dependencyManagement>"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/wls-configure-embedded-engine-proc |
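After importing the BOM and declaring the dependencies above, one way to confirm which versions Maven actually resolved is the standard maven-dependency-plugin; a sketch, assuming it is run from the project directory that contains the pom.xml:

mvn dependency:tree -Dincludes=org.drools,org.kie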
Chapter 3. Differences between OpenShift Container Platform 3 and 4 | Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.11 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.11 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically. For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.
In OpenShift Container Platform 4.11, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.11 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.11, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.11 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.11. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.11. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.11 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.11 ships with several CSI drivers . You can also install your own driver. For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. 
Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.11: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.11. For more information, see Understanding persistent storage . Migration of in-tree volumes to CSI drivers OpenShift Container Platform 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In OpenShift Container Platform 4.11, CSI drivers are the new default for the following in-tree volume types: Azure Disk OpenStack Cinder All aspects of volume lifecycle, such as creation, deletion, mounting, and unmounting, are handled by the CSI driver. For more information, see CSI automatic migration . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.11. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovs-multitenant . The default network isolation mode for OpenShift Container Platform 4.11 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.11 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.11, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.11. Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. For more information, see Installing OpenShift Logging . Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. For more information, see About OpenShift Logging . Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.11. For more information on the explicitly unsupported logging cases, see the logging support documentation . 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.11.
Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.11. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network. Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.11 requires mutual TLS, whereas in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.11. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. Default security context constraints The restricted security context constraints (SCC) in OpenShift Container Platform 4 can no longer be accessed by any authenticated user, as the restricted SCC could be in OpenShift Container Platform 3.11. The broad authenticated access is now granted to the restricted-v2 SCC, which is more restrictive than the old restricted SCC. The restricted SCC still exists; users that want to use it must be specifically given permissions to do so. For more information, see Managing security context constraints . 3.3.5. Monitoring considerations Review the following monitoring changes when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.11. You cannot migrate Hawkular configurations and metrics to Prometheus. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Applying custom Alertmanager configuration . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/planning-migration-3-4
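To illustrate the networking recommendation above (reproducing ovs-multitenant behavior with network policies), a minimal allow-same-namespace NetworkPolicy is one of the building blocks; the full procedure in the OpenShift documentation adds further policies for ingress and monitoring traffic. The project name is a placeholder, and this is a sketch rather than the complete migration step.

oc apply -n <project> -f - << EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF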
Chapter 38. Scheduling problems on the real-time kernel and solutions Scheduling on the real-time kernel can sometimes have unintended consequences. By using the information provided, you can understand the problems related to scheduling policies, scheduler throttling, and thread starvation on the real-time kernel, as well as potential solutions. 38.1. Scheduling policies for the real-time kernel The real-time scheduling policies share one main characteristic: they run until a higher priority thread interrupts them or until they wait, either by sleeping or by performing I/O. In the case of SCHED_RR , the operating system interrupts a running thread so that another thread of equal SCHED_RR priority can run. In either case, the POSIX specifications that define the policies make no provision for allowing lower priority threads to get any CPU time. This characteristic of real-time threads means that it is easy to write an application that monopolizes 100% of a given CPU. However, this causes problems for the operating system. For example, the operating system is responsible for managing both system-wide and per-CPU resources and must periodically examine data structures describing these resources and perform housekeeping activities with them. But if a core is monopolized by a SCHED_FIFO thread, it cannot perform its housekeeping tasks. Eventually the entire system becomes unstable and can potentially crash. On the RHEL for Real Time kernel, interrupt handlers run as threads with a SCHED_FIFO priority. The default priority is 50. A cpu-hog thread with a SCHED_FIFO or SCHED_RR policy higher than the interrupt handler threads can prevent interrupt handlers from running. This causes the programs waiting for data signaled by those interrupts to starve and fail. 38.2. Scheduler throttling in the real-time kernel The real-time kernel includes a safeguard mechanism to enable allocating bandwidth for use by the real-time tasks. The safeguard mechanism is known as real-time scheduler throttling. The default values for the real-time throttling mechanism define that the real-time tasks can use 95% of the CPU time. The remaining 5% is devoted to non real-time tasks, such as tasks running under SCHED_OTHER and similar scheduling policies. It is important to note that if a single real-time task occupies the 95% CPU time slot, the remaining real-time tasks on that CPU will not run. Only the non real-time tasks use the remaining 5% of CPU time. The default values can have the following performance impacts: The real-time tasks have at most 95% of CPU time available for them, which can affect their performance. The real-time tasks do not lock up the system by not allowing non real-time tasks to run. The real-time scheduler throttling is controlled by the following parameters in the /proc file system: The /proc/sys/kernel/sched_rt_period_us parameter Defines the period in microseconds, which is 100% of the CPU bandwidth. The default value is 1,000,000 microseconds, which is 1 second. Changes to the period's value must be carefully considered because a period value that is either very high or low can cause problems. The /proc/sys/kernel/sched_rt_runtime_us parameter Defines the total bandwidth available for all real-time tasks. The default value is 950,000 microseconds (0.95 s), which is 95% of the CPU bandwidth. Setting the value to -1 configures the real-time tasks to use up to 100% of CPU time. This is only adequate when the real-time tasks are well engineered and have no obvious caveats, such as unbounded polling loops.
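As a quick illustration (a sketch, not part of the original procedure), you can inspect and adjust these values from a shell as root; the 900000 value below is only an example of a 90% allocation, not a recommendation:
# Read the current throttling settings; both values are in microseconds
cat /proc/sys/kernel/sched_rt_period_us
cat /proc/sys/kernel/sched_rt_runtime_us
# Example only: allow real-time tasks at most 90% of each 1-second period (run as root)
echo 900000 > /proc/sys/kernel/sched_rt_runtime_us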
38.3. Thread starvation in the real-time kernel Thread starvation occurs when a thread is on a CPU run queue for longer than the starvation threshold and does not make progress. A common cause of thread starvation is running a fixed-priority polling application, such as one that uses SCHED_FIFO or SCHED_RR , bound to a CPU. Because the polling application does not block for I/O, it can prevent other threads, such as kworkers , from running on that CPU. An early attempt to reduce thread starvation is called real-time throttling. In real-time throttling, each CPU has a portion of the execution time dedicated to non real-time tasks. The default setting for throttling is on, with 95% of the CPU for real-time tasks and 5% reserved for non real-time tasks. This works if you have a single real-time task causing starvation but does not work if there are multiple real-time tasks assigned to a CPU. You can work around the problem by using: The stalld mechanism The stalld mechanism is an alternative to real-time throttling and avoids some of the throttling drawbacks. stalld is a daemon that periodically monitors the state of each thread in the system and looks for threads that are on the run queue for a specified length of time without being run. stalld temporarily changes that thread to use the SCHED_DEADLINE policy and allocates the thread a small slice of time on the specified CPU. The thread then runs, and when the time slice is used, the thread returns to its original scheduling policy and stalld continues to monitor thread states. Housekeeping CPUs are CPUs that run all daemons, shell processes, kernel threads, interrupt handlers, and all work that can be dispatched from an isolated CPU. For housekeeping CPUs with real-time throttling disabled, stalld monitors the CPU that runs the main workload and assigns the CPU with the SCHED_FIFO busy loop, which helps to detect stalled threads and improve the thread priority as required, within a previously defined acceptable level of added noise. stalld can be preferable if the real-time throttling mechanism causes unreasonable noise in the main workload. With stalld , you can more precisely control the noise introduced by boosting starved threads. The shell script /usr/bin/throttlectl automatically disables real-time throttling when stalld is run. You can list the current throttling values by using the /usr/bin/throttlectl show script. Disabling real-time throttling The following parameters in the /proc filesystem control real-time throttling: The /proc/sys/kernel/sched_rt_period_us parameter specifies the number of microseconds in a period and defaults to 1 million, which is 1 second. The /proc/sys/kernel/sched_rt_runtime_us parameter specifies the number of microseconds that can be used by a real-time task before throttling occurs and it defaults to 950,000 or 95% of the available CPU cycles. You can disable throttling by passing a value of -1 into the sched_rt_runtime_us file by using the echo -1 > /proc/sys/kernel/sched_rt_runtime_us command. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/scheduling-problems-on-the-real-time-kernel-and-solutions_optimizing-rhel9-for-real-time-for-low-latency-operation |
Customizing Red Hat Trusted Application Pipeline | Customizing Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.0 Learn how to customize default software templates and build pipeline configurations. Red Hat Trusted Application Pipeline Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/customizing_red_hat_trusted_application_pipeline/index |
Chapter 13. Changing service account passwords using director Operator | Chapter 13. Changing service account passwords using director Operator Red Hat OpenStack Platform (RHOSP) services and the databases that they use are authenticated by their Identity service (keystone) credentials. The Identity service generates these RHOSP passwords during the initial RHOSP deployment process. You might be required to periodically update passwords for threat mitigation or security compliance. You can use tools native to director Operator (OSPdO) to change many of the generated passwords after your RHOSP environment is deployed. 13.1. Rotating overcloud service account passwords with director Operator You can rotate the overcloud service account passwords used with a director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment. Procedure Create a backup of the current tripleo-passwords secret: Create a plain text file named tripleo-overcloud-passwords_preserve_list to specify that the passwords for the following services should not be rotated: You can add additional services to this list if there are other services for which you want to preserve the password. Create a password parameter file, tripleo-overcloud-passwords.yaml , that lists the passwords that should not be modified: Validate that the tripleo-overcloud-passwords.yaml file contains the passwords that you do not want to rotate. Update the tripleo-password secret: Create Ansible playbooks to configure the overcloud with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD . Apply the updated configuration. For more information, see Applying overcloud configuration with director Operator . Verification Compare the new NovaPassword in the secret to what is now installed on the Controller node. Get the password from the updated secret: Example output: Retrieve the password for the Compute service (nova) running on the Controller nodes: Access the openstackclient remote shell: Ensure that you are in the home directory: Retrieve the Compute service password: Example output: | [
"oc get secret tripleo-passwords -n openstack -o yaml > tripleo-passwords_backup.yaml",
"parameter_defaults BarbicanSimpleCryptoKek KeystoneCredential0 KeystoneCredential1 KeystoneFernetKey0 KeystoneFernetKey1 KeystoneFernetKeys CephClientKey CephClusterFSID CephManilaClientKey CephRgwKey HeatAuthEncryptionKey MysqlClustercheckPassword MysqlMariabackupPassword PacemakerRemoteAuthkey PcsdPassword",
"oc get secret tripleo-passwords -n openstack -o jsonpath='{.data.tripleo-overcloud-passwords\\.yaml}' | base64 -d | grep -f ./tripleo-overcloud-passwords_preserve_list > tripleo-overcloud-passwords.yaml",
"oc create secret generic tripleo-passwords -n openstack --from-file=./tripleo-overcloud-passwords.yaml --dry-run=client -o yaml | oc apply -f -",
"oc get secret tripleo-passwords -n openstack -o jsonpath='{.data.tripleo-overcloud-passwords\\.yaml}' | base64 -d | grep NovaPassword",
"NovaPassword: hp4xpt7t2p79ktqjjnxpqwbp6",
"oc rsh openstackclient -n openstack",
"cd",
"ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller -b -a \"grep ^connection /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf\"",
"172.22.0.120 | CHANGED | rc=0 >> connection=mysql+pymysql://nova_api:[email protected]/nova_api?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo connection=mysql+pymysql://nova:[email protected]/nova?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/assembly_changing-service-account-passwords-for-director-operator |
25.2. Adding FCP-Attached Logical Units (LUNs) | 25.2. Adding FCP-Attached Logical Units (LUNs) The following is an example of how to add an FCP LUN. Note If running under z/VM, make sure the FCP adapter is attached to the z/VM guest virtual machine. For multipathing in production environments there would be at least two FCP devices on two different physical adapters (CHPIDs). For example: 25.2.1. Dynamically Activating an FCP LUN Follow these steps to activate a LUN: Use the cio_ignore command to remove the FCP adapter from the list of ignored devices and make it visible to Linux: Replace DeviceNumber with the device number of the FCP adapter. For example: To bring the FCP adapter device online, use the following command: Verify that the required WWPN was found by the automatic port scanning of the zfcp device driver: Activate the FCP LUN by adding it to the port (WWPN) through which you would like to access the LUN: Find out the assigned SCSI device name: For more information, refer to the chapter on SCSI-over-Fibre Channel in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 . | [
"CP ATTACH FC00 TO * CP ATTACH FCD0 TO *",
"cio_ignore -r DeviceNumber",
"chccwdev -e fc00",
"ls -l /sys/bus/ccw/drivers/zfcp/0.0.fc00/ drwxr-xr-x. 3 root root 0 Apr 28 18:19 0x500507630040710b drwxr-xr-x. 3 root root 0 Apr 28 18:19 0x50050763050b073d drwxr-xr-x. 3 root root 0 Apr 28 18:19 0x500507630e060521 drwxr-xr-x. 3 root root 0 Apr 28 18:19 0x500507630e860521 -r--r--r--. 1 root root 4096 Apr 28 18:17 availability -r--r--r--. 1 root root 4096 Apr 28 18:19 card_version -rw-r--r--. 1 root root 4096 Apr 28 18:17 cmb_enable -r--r--r--. 1 root root 4096 Apr 28 18:17 cutype -r--r--r--. 1 root root 4096 Apr 28 18:17 devtype lrwxrwxrwx. 1 root root 0 Apr 28 18:17 driver -> ../../../../bus/ccw/drivers/zfcp -rw-r--r--. 1 root root 4096 Apr 28 18:17 failed -r--r--r--. 1 root root 4096 Apr 28 18:19 hardware_version drwxr-xr-x. 35 root root 0 Apr 28 18:17 host0 -r--r--r--. 1 root root 4096 Apr 28 18:17 in_recovery -r--r--r--. 1 root root 4096 Apr 28 18:19 lic_version -r--r--r--. 1 root root 4096 Apr 28 18:17 modalias -rw-r--r--. 1 root root 4096 Apr 28 18:17 online -r--r--r--. 1 root root 4096 Apr 28 18:19 peer_d_id -r--r--r--. 1 root root 4096 Apr 28 18:19 peer_wwnn -r--r--r--. 1 root root 4096 Apr 28 18:19 peer_wwpn --w-------. 1 root root 4096 Apr 28 18:19 port_remove --w-------. 1 root root 4096 Apr 28 18:19 port_rescan drwxr-xr-x. 2 root root 0 Apr 28 18:19 power -r--r--r--. 1 root root 4096 Apr 28 18:19 status lrwxrwxrwx. 1 root root 0 Apr 28 18:17 subsystem -> ../../../../bus/ccw -rw-r--r--. 1 root root 4096 Apr 28 18:17 uevent",
"echo 0x4020400100000000 > /sys/bus/ccw/drivers/zfcp/0.0.fc00/0x50050763050b073d/unit_add",
"lszfcp -DV /sys/devices/css0/0.0.0015/0.0.fc00/0x50050763050b073d/0x4020400100000000 /sys/bus/ccw/drivers/zfcp/0.0.fc00/host0/rport-0:0-21/target0:0:21/0:0:21:1089355792"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ap-s390info-adding_fcp-attached_luns |
Chapter 9. Configuring outgoing HTTP requests | Chapter 9. Configuring outgoing HTTP requests Red Hat build of Keycloak often needs to make requests to the applications and services that it secures. Red Hat build of Keycloak manages these outgoing connections using an HTTP client. This chapter shows how to configure the client, connection pool, proxy environment settings, timeouts, and more. 9.1. Client Configuration Command The HTTP client that Red Hat build of Keycloak uses for outgoing communication is highly configurable. To configure the Red Hat build of Keycloak outgoing HTTP client, enter this command: bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value> The following are the command options: establish-connection-timeout-millis Maximum time in milliseconds until establishing a connection times out. Default: Not set. socket-timeout-millis Maximum time of inactivity between two data packets until a socket connection times out, in milliseconds. Default: 5000ms connection-pool-size Size of the connection pool for outgoing connections. Default: 128. max-pooled-per-route How many connections can be pooled per host. Default: 64. connection-ttl-millis Maximum connection time to live in milliseconds. Default: Not set. max-connection-idle-time-millis Maximum time an idle connection stays in the connection pool, in milliseconds. Idle connections will be removed from the pool by a background cleaner thread. Set this option to -1 to disable this check. Default: 900000. disable-cookies Enable or disable caching of cookies. Default: true. client-keystore File path to a Java keystore file. This keystore contains client certificates for two-way SSL. client-keystore-password Password for the client keystore. REQUIRED, when client-keystore is set. client-key-password Password for the private key of the client. REQUIRED, when client-keystore is set. proxy-mappings Specify proxy configurations for outgoing HTTP requests. For more details, see Section 9.2, "Proxy mappings for outgoing HTTP requests" . disable-trust-manager If an outgoing request requires HTTPS and this configuration option is set to true, you do not have to specify a truststore. This setting should be used only during development and never in production because it will disable verification of SSL certificates. Default: false. 9.2. Proxy mappings for outgoing HTTP requests To configure outgoing requests to use a proxy, you can use the following standard proxy environment variables to configure the proxy mappings: HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . The HTTP_PROXY and HTTPS_PROXY variables represent the proxy server that is used for outgoing HTTP requests. Red Hat build of Keycloak does not differentiate between the two variables. If you define both variables, HTTPS_PROXY takes precedence regardless of the actual scheme that the proxy server uses. The NO_PROXY variable defines a comma separated list of hostnames that should not use the proxy. For each hostname that you specify, all its subdomains are also excluded from using proxy. The environment variables can be lowercase or uppercase. Lowercase takes precedence. For example, if you define both HTTP_PROXY and http_proxy , http_proxy is used. Example of proxy mappings and environment variables In this example, the following results occur: All outgoing requests use the proxy https://www-proxy.acme.com:8080 except for requests to google.com or any subdomain of google.com, such as auth.google.com. 
login.facebook.com and all its subdomains do not use the defined proxy, but groups.facebook.com uses the proxy because it is not a subdomain of login.facebook.com. 9.3. Proxy mappings using regular expressions An alternative to using environment variables for proxy mappings is to configure a comma-delimited list of proxy-mappings for outgoing requests sent by Red Hat build of Keycloak. A proxy-mapping consists of a regex-based hostname pattern and a proxy-uri, using the format hostname-pattern;proxy-uri . For example, consider the following regex: You apply a regex-based hostname pattern by entering this command: bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings="'*\\\.(google|googleapis)\\\.com;http://www-proxy.acme.com:8080'" To determine the proxy for the outgoing HTTP request, the following occurs: The target hostname is matched against all configured hostname patterns. The proxy-uri of the first matching pattern is used. If no configured pattern matches the hostname, no proxy is used. When your proxy server requires authentication, include the credentials of the proxy user in the format username:password@ . For example: Example of regular expressions for proxy-mapping: In this example, the following occurs: The special value NO_PROXY for the proxy-uri is used, which means that no proxy is used for hosts matching the associated hostname pattern. A catch-all pattern ends the proxy-mappings, providing a default proxy for all outgoing requests. 9.4. Configuring trusted certificates for TLS connections See Configuring trusted certificates for how to configure a Red Hat build of Keycloak Truststore so that Red Hat build of Keycloak is able to perform outgoing requests using TLS. | [
"bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value>",
"HTTPS_PROXY=https://www-proxy.acme.com:8080 NO_PROXY=google.com,login.facebook.com",
".*\\.(google|googleapis)\\.com",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings=\"'*\\\\\\.(google|googleapis)\\\\\\.com;http://www-proxy.acme.com:8080'\"",
".*\\.(google|googleapis)\\.com;http://proxyuser:[email protected]:8080",
"All requests to Google APIs use http://www-proxy.acme.com:8080 as proxy .*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080 All requests to internal systems use no proxy .*\\.acme\\.com;NO_PROXY All other requests use http://fallback:8080 as proxy .*;http://fallback:8080"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/outgoinghttp- |
15.9. Using GVFS Metadata | 15.9. Using GVFS Metadata GVFS implements its metadata storage as a set of simple key/value pairs bound to a particular file. This gives a user or application a way to save small pieces of runtime information, such as icon position, last-played location, position in a document, emblems, notes, and so on. Whenever a file or directory is moved, metadata is moved accordingly so that it stays connected to the respective file. GVFS stores all metadata privately, so it is available only on the machine. However, GVFS mounts and removable media are tracked as well. Note Removable media are now mounted in the /run/media/ directory instead of the /media directory. To view and manipulate metadata, you can use: the gvfs-info command; the gvfs-set-attribute command; or any other native GIO way of working with attributes. In the following example, a custom metadata attribute is set. Notice the differences between particular gvfs-info calls and the persistence of the data after a move or rename (note the gvfs-info command output): Example 15.5. Setting Custom Metadata Attribute | [
"touch /tmp/myfile gvfs-info -a 'metadata::*' /tmp/myfile attributes: gvfs-set-attribute -t string /tmp/myfile 'metadata::mynote' 'Please remember to delete this file!' gvfs-info -a 'metadata::*' /tmp/myfile attributes: metadata::mynote: Please remember to delete this file! gvfs-move /tmp/myfile /tmp/newfile gvfs-info -a 'metadata::*' /tmp/newfile attributes: metadata::mynote: Please remember to delete this file!"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/using-gvfs-metadata |
Chapter 6. ImageStreamLayers [image.openshift.io/v1] | Chapter 6. ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required blobs images 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources blobs object blobs is a map of blob name to metadata about the blob. blobs{} object ImageLayerData contains metadata about an image layer. images object images is a map between an image name and the names of the blobs and config that comprise the image. images{} object ImageBlobReferences describes the blob references within an image. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 6.1.1. .blobs Description blobs is a map of blob name to metadata about the blob. Type object 6.1.2. .blobs{} Description ImageLayerData contains metadata about an image layer. Type object Required size mediaType Property Type Description mediaType string MediaType of the referenced object. size integer Size of the layer in bytes as defined by the underlying store. This field is optional if the necessary information about size is not available. 6.1.3. .images Description images is a map between an image name and the names of the blobs and config that comprise the image. Type object 6.1.4. .images{} Description ImageBlobReferences describes the blob references within an image. Type object Property Type Description config string config, if set, is the blob that contains the image config. Some images do not have separate config blobs and this field will be set to nil if so. imageMissing boolean imageMissing is true if the image is referenced by the image stream but the image object has been deleted from the API by an administrator. When this field is set, layers and config fields may be empty and callers that depend on the image metadata should consider the image to be unavailable for download or viewing. layers array (string) layers is the list of blobs that compose this image, from base layer to top layer. All layers referenced by this array will be defined in the blobs map. Some images may have zero layers. manifests array (string) manifests is the list of other image names that this image points to. For a single architecture image, it is empty. For a multi-arch image, it consists of the digests of single architecture images, such images shouldn't have layers nor config. 6.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers GET : read layers of the specified ImageStream 6.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers Table 6.1. 
Global path parameters Parameter Type Description name string name of the ImageStreamLayers HTTP method GET Description read layers of the specified ImageStream Table 6.2. HTTP responses HTTP code Response body 200 - OK ImageStreamLayers schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/image_apis/imagestreamlayers-image-openshift-io-v1 |
Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform | Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform Red Hat Ansible Automation Platform 2.3 This guide provides procedures and reference information for the supported installation scenarios for the Red Hat Ansible Automation Platform operator on OpenShift Container Platform Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/index |
Chapter 3. Setting up and configuring the registry | Chapter 3. Setting up and configuring the registry 3.1. Configuring the registry for AWS user-provisioned infrastructure 3.1.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For S3 on AWS storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry 3.1.2. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 3.1.3. Image Registry Operator configuration parameters for AWS S3 The following configuration parameters are available for AWS S3 registry storage. The image registry spec.storage.s3 configuration parameter holds the information to configure the registry to use the AWS S3 service for back-end storage. See the S3 storage driver documentation for more information. Parameter Description bucket Bucket is the bucket name in which you want to store the registry's data. It is optional and is generated if not provided. region Region is the AWS region in which your bucket exists. It is optional and is set based on the installed AWS Region. regionEndpoint RegionEndpoint is the endpoint for S3 compatible storage services. It is optional and defaults based on the Region that is provided. virtualHostedStyle VirtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint. It is optional and defaults to false. Set this parameter to deploy OpenShift Container Platform to hidden regions. encrypt Encrypt specifies whether or not the registry stores the image in encrypted format. It is optional and defaults to false. keyID KeyID is the KMS key ID to use for encryption. It is optional. Encrypt must be true, or this parameter is ignored. 
cloudFront CloudFront configures Amazon Cloudfront as the storage middleware in a registry. It is optional. trustedCA The namespace for the config map referenced by trustedCA is openshift-config . The key for the bundle in the config map is ca-bundle.crt . It is optional. Note When the value of the regionEndpoint parameter is configured to a URL of a Rados Gateway, an explicit port must not be specified. For example: regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local 3.2. Configuring the registry for GCP user-provisioned infrastructure 3.2.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For GCS on GCP storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by GCP: REGISTRY_STORAGE_GCS_KEYFILE Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry 3.2.2. Configuring the registry storage for GCP with user-provisioned infrastructure If the Registry Operator cannot create a Google Cloud Platform (GCP) bucket, you must set up the storage medium manually and configure the settings in the registry custom resource (CR). Prerequisites A cluster on GCP with user-provisioned infrastructure. To configure registry storage for GCP, you need to provide Registry Operator cloud credentials. For GCS on GCP storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by GCP: REGISTRY_STORAGE_GCS_KEYFILE Procedure Set up an Object Lifecycle Management policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration # ... storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name> # ... Warning You can secure your registry images that use a Google Cloud Storage bucket by setting public access prevention . 3.2.3. Image Registry Operator configuration parameters for GCP GCS The following configuration parameters are available for GCP GCS registry storage. Parameter Description bucket Bucket is the bucket name in which you want to store the registry's data. It is optional and is generated if not provided. region Region is the GCS location in which your bucket exists. It is optional and is set based on the installed GCS Region. projectID ProjectID is the Project ID of the GCP project that this bucket should be associated with. It is optional. keyID KeyID is the KMS key ID to use for encryption. It is optional because buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. 3.3. Configuring the registry for OpenStack user-provisioned infrastructure You can configure the registry of a cluster that runs on your own Red Hat OpenStack Platform (RHOSP) infrastructure. 3.3.1. 
Configuring Image Registry Operator redirects By disabling redirects, you can configure the Image Registry Operator to control whether clients such as OpenShift Container Platform cluster builds or external systems like developer machines are redirected to pull images directly from Red Hat OpenStack Platform (RHOSP) Swift storage. This configuration is optional and depends on whether the clients trust the storage's SSL/TLS certificates. Note In situations where clients do not trust the storage certificate, you can set the disableRedirect option to true, which proxies traffic through the image registry. Consequently, however, the image registry might require more resources, especially network bandwidth, to handle the increased load. Alternatively, if clients trust the storage certificate, the registry can allow redirects. This reduces resource demand on the registry itself. Some users might prefer to configure their clients to trust their self-signed certificate authorities (CAs) instead of disabling redirects. If you are using a self-signed CA, you must decide between trusting the custom CAs or disabling redirects. Procedure To ensure that the image registry proxies traffic instead of relying on Swift storage, change the value of the spec.disableRedirect field in the config.imageregistry object to true by running the following command: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"disableRedirect":true}}' 3.3.2. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For Swift on Red Hat OpenStack Platform (RHOSP) storage, the secret is expected to contain the following two keys: REGISTRY_STORAGE_SWIFT_USERNAME REGISTRY_STORAGE_SWIFT_PASSWORD Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry 3.3.3. Registry storage for RHOSP with user-provisioned infrastructure If the Registry Operator cannot create a Swift bucket, you must set up the storage medium manually and configure the settings in the registry custom resource (CR). Prerequisites A cluster on Red Hat OpenStack Platform (RHOSP) with user-provisioned infrastructure. To configure registry storage for RHOSP, you need to provide Registry Operator cloud credentials. For Swift on RHOSP storage, the secret is expected to contain the following two keys: REGISTRY_STORAGE_SWIFT_USERNAME REGISTRY_STORAGE_SWIFT_PASSWORD Procedure Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration # ... storage: swift: container: <container-id> # ... 3.3.4. Image Registry Operator configuration parameters for RHOSP Swift The following configuration parameters are available for Red Hat OpenStack Platform (RHOSP) Swift registry storage. Parameter Description authURL Defines the URL for obtaining the authentication token.
This value is optional. authVersion Specifies the Auth version of RHOSP, for example, authVersion: "3" . This value is optional. container Defines the name of a Swift container for storing registry data. This value is optional. domain Specifies the RHOSP domain name for the Identity v3 API. This value is optional. domainID Specifies the RHOSP domain ID for the Identity v3 API. This value is optional. tenant Defines the RHOSP tenant name to be used by the registry. This value is optional. tenantID Defines the RHOSP tenant ID to be used by the registry. This value is optional. regionName Defines the RHOSP region in which the container exists. This value is optional. 3.4. Configuring the registry for Azure user-provisioned infrastructure 3.4.1. Configuring a secret for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. For Azure registry storage, the secret is expected to contain one key whose value is the contents of a credentials file provided by Azure: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an OpenShift Container Platform secret that contains the required key. USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry 3.4.2. Configuring registry storage for Azure During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage. Prerequisites A cluster on Azure with user-provisioned infrastructure. To configure registry storage for Azure, provide Registry Operator cloud credentials. For Azure storage the secret is expected to contain one key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an Azure storage container . Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: azure: accountName: <storage-account-name> container: <container-name> 3.4.3. Configuring registry storage for Azure Government During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage. Prerequisites A cluster on Azure with user-provisioned infrastructure in a government region. To configure registry storage for Azure, provide Registry Operator cloud credentials. For Azure storage, the secret is expected to contain one key: REGISTRY_STORAGE_AZURE_ACCOUNTKEY Procedure Create an Azure storage container . Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1 1 cloudName is the name of the Azure cloud environment, which can be used to configure the Azure SDK with the appropriate Azure API endpoints. Defaults to AzurePublicCloud . You can also set cloudName to AzureUSGovernmentCloud , AzureChinaCloud , or AzureGermanCloud with sufficient credentials. 3.5. 
Configuring the registry for RHOSP 3.5.1. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Configuring the registry for bare metal 3.6.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.6.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . 
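If you are unsure of the current value, you can check it first; for example, a quick check against the default cluster configuration resource (a sketch, assuming the standard resource name):
# Print the current management state of the image registry (expected: Removed or Managed)
oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'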
Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.6.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.6.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to Managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry.operator.openshift.io Then, change the line managementState: Removed to managementState: Managed . 3.6.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.6.3.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.6.3.4. Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. 
For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.6.3.5. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. 
Get the bucket name by entering the following command: USD bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: USD AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print USD2}' | base64 --decode) USD AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print USD2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: USD oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: USD route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm USD oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Nooba object storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"USD{bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://USD{route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.6.4. Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if there are other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: USD oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.6.5. 
Additional resources Recommended configurable storage technology Configuring Image Registry to use OpenShift Data Foundation 3.7. Configuring the registry for vSphere 3.7.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.7.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed . For example: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.7.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.7.3.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100 Gi capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: $ oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure.
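If you prefer a non-interactive change over the oc edit step that follows, a merge patch can usually set the claim directly. This is a hedged sketch rather than the documented procedure, and registry-pvc is a placeholder for a persistent volume claim that you have already created in the openshift-image-registry namespace: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":"registry-pvc"}}}}'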
Check the registry configuration: $ oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: $ oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.7.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.7.3.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim.
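Before creating the claim from pvc.yaml, you can optionally list the available storage classes to see which default class will provision the volume, because the example manifest does not set storageClassName. This is an informal check, not part of the documented procedure: $ oc get storageclass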
Enter the following command to create the PersistentVolumeClaim object from the file: $ oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: $ oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.7.3.4. Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: $ bucket_name=$(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: $ route_host=$(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: $ oc patch config.image/cluster -p
'{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.7.3.5. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: $ bucket_name=$(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: $ route_host=$(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Noobaa object storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.7.4.
Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.7.5. Additional resources Recommended configurable storage technology Configuring Image Registry to use OpenShift Data Foundation 3.8. Configuring the registry for Red Hat OpenShift Data Foundation To configure the OpenShift image registry on bare metal and vSphere to use Red Hat OpenShift Data Foundation storage, you must install OpenShift Data Foundation and then configure the image registry using Ceph or Noobaa. 3.8.1. Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace.
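When the claim is bound, OpenShift Data Foundation typically provisions a secret and a config map named after the claim in the same namespace, and the following steps read the credentials from that secret. As an optional, informal check you can confirm that both objects exist before continuing: $ oc get secret/rgwbucket configmap/rgwbucket -n openshift-storage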
Get the bucket name by entering the following command: $ bucket_name=$(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: $ route_host=$(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.8.2. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace.
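The credential-extraction step that follows parses the secret YAML with grep and awk. As an alternative sketch, the same values can usually be read with a jsonpath query, mirroring the Ceph RGW steps earlier in this document: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)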
Get the bucket name by entering the following command: $ bucket_name=$(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: $ route_host=$(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Noobaa object storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.8.3. Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage. Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.8.4.
Additional resources Configuring Image Registry to use OpenShift Data Foundation Performance tuning guide for Multicloud Object Gateway (NooBaa) 3.9. Configuring the registry for Nutanix By following the steps outlined in this documentation, users can optimize container image distribution, security, and access controls, enabling a robust foundation for Nutanix applications on OpenShift Container Platform. 3.9.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.9.2. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed . For example: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.9.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.9.3.1. Configuring registry storage for Nutanix As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on Nutanix. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. You must have 100 Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: $ oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: $ oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class.
However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: $ oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m 3.9.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.9.3.3. Configuring block registry storage for Nutanix volumes To allow the image registry to use block storage types such as Nutanix volumes during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a Nutanix PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: $ oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: $ oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.9.3.4.
Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Ceph RGW storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage. Procedure Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: $ bucket_name=$(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: $ route_host=$(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}') Create a config map that uses an ingress certificate by entering the following commands: $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Ceph RGW object storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.9.3.5. Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use Noobaa storage.
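Before you begin, you can optionally confirm that the NooBaa S3 route read later in this procedure exists. This is an informal check, and the route name assumes a default OpenShift Data Foundation deployment: $ oc get route s3 -n openshift-storage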
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage. Procedure Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example: cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF 1 Alternatively, you can use the openshift-image-registry namespace. Get the bucket name by entering the following command: $ bucket_name=$(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}') Get the AWS credentials by entering the following commands: $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode) $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode) Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket under openshift-image-registry project by entering the following command: $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry Get the route host by entering the following command: $ route_host=$(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}') Create a config map that uses an ingress certificate by entering the following commands: $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config Configure the image registry to use the Noobaa object storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge 3.9.4. Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry: Ceph, a shared and distributed file system and on-premises object storage NooBaa, providing a Multicloud Object Gateway This document outlines the procedure to configure the image registry to use CephFS storage. Note CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if other options are available, such as Ceph RGW or Noobaa. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. You installed the oc CLI. You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage.
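Before creating the claim, you can optionally confirm that the CephFS storage class is present. This is an informal check, and the class name assumes a default OpenShift Data Foundation installation: $ oc get storageclass ocs-storagecluster-cephfs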
Procedure Create a PVC to use the cephfs storage class. For example: cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF Configure the image registry to use the CephFS file system storage by entering the following command: $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge 3.9.5. Additional resources Recommended configurable storage technology Configuring Image Registry to use OpenShift Data Foundation |
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local",
"oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: swift: container: <container-id>",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/registry/setting-up-and-configuring-the-registry |
Chapter 119. KafkaMirrorMakerSpec schema reference | Chapter 119. KafkaMirrorMakerSpec schema reference Used in: KafkaMirrorMaker Full list of KafkaMirrorMakerSpec schema properties Configures Kafka MirrorMaker. 119.1. include Use the include property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster. The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using A|B or all topics using * . You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker. 119.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters. Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair. 119.3. logging Kafka MirrorMaker has its own configurable logger: mirrormaker.root.logger MirrorMaker uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 119.4. KafkaMirrorMakerSpec schema properties Property Property type Description version string The Kafka MirrorMaker version. Defaults to the latest version. Consult the documentation to understand the process required to upgrade or downgrade the version. replicas integer The number of pods in the Deployment . 
image string The container image used for Kafka MirrorMaker pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. consumer KafkaMirrorMakerConsumerSpec Configuration of source cluster. producer KafkaMirrorMakerProducerSpec Configuration of target cluster. resources ResourceRequirements CPU and memory resources to reserve. whitelist string The whitelist property has been deprecated, and should now be configured using spec.include . List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression A|B . Or, as a special case, you can mirror all topics using the regular expression * . You can also specify multiple regular expressions separated by commas. include string List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression A|B . Or, as a special case, you can mirror all topics using the regular expression * . You can also specify multiple regular expressions separated by commas. jvmOptions JvmOptions JVM Options for pods. logging InlineLogging , ExternalLogging Logging configuration for MirrorMaker. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. tracing JaegerTracing , OpenTelemetryTracing The configuration of tracing in Kafka MirrorMaker. template KafkaMirrorMakerTemplate Template to specify how Kafka MirrorMaker resources, Deployments and Pods , are generated. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkamirrormakerspec-reference |
Chapter 5. Mounting an SMB Share | Chapter 5. Mounting an SMB Share The Server Message Block (SMB) protocol implements an application-layer network protocol used to access resources on a server, such as file shares and shared printers. Note In the context of SMB, you can find mentions of the Common Internet File System (CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocols are supported, and the kernel module and utilities involved in mounting SMB and CIFS shares both use the name cifs . The cifs-utils package provides utilities to: Mount SMB and CIFS shares Manage NT LAN Manager (NTLM) credentials in the kernel's keyring Set and display Access Control Lists (ACL) in a security descriptor on SMB and CIFS shares 5.1. Supported SMB protocol versions The cifs.ko kernel module supports the following SMB protocol versions: SMB 1 Warning The SMB1 protocol is deprecated due to known security issues, and is only safe to use on a private network . The main reason that SMB1 is still provided as a supported option is that currently it is the only SMB protocol version that supports UNIX extensions. If you do not need to use UNIX extensions on SMB, Red Hat strongly recommends using SMB2 or later. SMB 2.0 SMB 2.1 SMB 3.0 SMB 3.1.1 Note Depending on the protocol version, not all SMB features are implemented. 5.2. UNIX extensions support Samba uses the CAP_UNIX capability bit in the SMB protocol to provide the UNIX extensions feature. These extensions are also supported by the cifs.ko kernel module. However, both Samba and the kernel module support UNIX extensions only in the SMB 1 protocol. Prerequisites The cifs-utils package is installed. Procedure Set the server min protocol parameter in the [global] section in the /etc/samba/smb.conf file to NT1 . Mount the share using the SMB 1 protocol by providing the -o vers=1.0 option to the mount command. For example: By default, the kernel module uses SMB 2 or the highest later protocol version supported by the server. Passing the -o vers=1.0 option to the mount command forces the kernel module to use the SMB 1 protocol, which is required for using UNIX extensions. Verification Display the options of the mounted share: If the unix entry is displayed in the list of mount options, UNIX extensions are enabled. 5.3. Manually mounting an SMB share If you only require an SMB share to be temporarily mounted, you can mount it manually using the mount utility. Note Manually mounted shares are not mounted automatically again when you reboot the system. To configure Red Hat Enterprise Linux to mount the share automatically when the system boots, see Mounting an SMB share automatically when the system boots . Prerequisites The cifs-utils package is installed. Procedure Use the mount utility with the -t cifs parameter to mount an SMB share: In the -o parameter, you can specify options that are used to mount the share. For details, see the OPTIONS section in the mount.cifs(8) man page and Frequently used mount options . Example 5.1. Mounting a share using an encrypted SMB 3.0 connection To mount the \\server\example\ share as the DOMAIN \Administrator user over an encrypted SMB 3.0 connection into the /mnt/ directory: Verification List the content of the mounted share: 5.4. Mounting an SMB share automatically when the system boots If access to a mounted SMB share is permanently required on a server, mount the share automatically at boot time. Prerequisites The cifs-utils package is installed.
Procedure Add an entry for the share to the /etc/fstab file. For example: Important To enable the system to mount a share automatically, you must store the user name, password, and domain name in a credentials file. For details, see Creating a credentials file to authenticate to an SMB share . In the fourth field of the row in the /etc/fstab , specify mount options, such as the path to the credentials file. For details, see the OPTIONS section in the mount.cifs(8) man page and Frequently used mount options . Verification Mount the share by specifying the mount point: 5.5. Creating a credentials file to authenticate to an SMB share In certain situations, such as when mounting a share automatically at boot time, a share should be mounted without entering the user name and password. To implement this, create a credentials file. Prerequisites The cifs-utils package is installed. Procedure Create a file, such as /root/smb.cred , and specify the user name, password, and domain name in that file: Set the permissions to only allow the owner to access the file: You can now pass the credentials= file_name mount option to the mount utility or use it in the /etc/fstab file to mount the share without being prompted for the user name and password. 5.6. Performing a multi-user SMB mount The credentials you provide to mount a share determine the access permissions on the mount point by default. For example, if you use the DOMAIN \example user when you mount a share, all operations on the share will be executed as this user, regardless of which local user performs the operation. However, in certain situations, the administrator wants to mount a share automatically when the system boots, but users should perform actions on the share's content using their own credentials. The multiuser mount option lets you configure this scenario. Important To use the multiuser mount option, you must additionally set the sec mount option to a security type that supports providing credentials in a non-interactive way, such as krb5 or the ntlmssp option with a credentials file. For details, see Accessing a share as a user . The root user mounts the share using the multiuser option and an account that has minimal access to the contents of the share. Regular users can then provide their user name and password to the current session's kernel keyring using the cifscreds utility. If the user accesses the content of the mounted share, the kernel uses the credentials from the kernel keyring instead of the one initially used to mount the share. Using this feature consists of the following steps: Mount a share with the multiuser option . Optionally, verify if the share was successfully mounted with the multiuser option . Access the share as a user . Prerequisites The cifs-utils package is installed. 5.6.1. Mounting a share with the multiuser option Before users can access the share with their own credentials, mount the share as the root user using an account with limited permissions. Procedure To mount a share automatically with the multiuser option when the system boots: Create the entry for the share in the /etc/fstab file. For example: Mount the share: If you do not want to mount the share automatically when the system boots, mount it manually by passing -o multiuser,sec=security_type to the mount command. For details about mounting an SMB share manually, see Manually mounting an SMB share . 5.6.2.
Verifying if an SMB share is mounted with the multiuser option To verify if a share is mounted with the multiuser option, display the mount options. Procedure If the multiuser entry is displayed in the list of mount options, the feature is enabled. 5.6.3. Accessing a share as a user If an SMB share is mounted with the multiuser option, users can provide their credentials for the server to the kernel's keyring: When the user performs operations in the directory that contains the mounted SMB share, the server applies the file system permissions for this user, instead of the one initially used when the share was mounted. Note Multiple users can perform operations using their own credentials on the mounted share at the same time. 5.7. Frequently used SMB mount options When you mount an SMB share, the mount options determine: How the connection will be established with the server. For example, which SMB protocol version is used when connecting to the server. How the share will be mounted into the local file system. For example, if the system overrides the remote file and directory permissions to enable multiple local users to access the content on the server. To set multiple options in the fourth field of the /etc/fstab file or in the -o parameter of a mount command, separate them with commas. For example, see Mounting a share with the multiuser option . The following list gives frequently used mount options: Option Description credentials= file_name Sets the path to the credentials file. See Authenticating to an SMB share using a credentials file . dir_mode= mode Sets the directory mode if the server does not support CIFS UNIX extensions. file_mode= mode Sets the file mode if the server does not support CIFS UNIX extensions. password= password Sets the password used to authenticate to the SMB server. Alternatively, specify a credentials file using the credentials option. seal Enables encryption support for connections using SMB 3.0 or a later protocol version. Therefore, use seal together with the vers mount option set to 3.0 or later. See the example in Manually mounting an SMB share . sec= security_mode Sets the security mode, such as ntlmsspi , to enable NTLMv2 password hashing and enabled packet signing. For a list of supported values, see the option's description in the mount.cifs(8) man page on your system. If the server does not support the ntlmv2 security mode, use sec=ntlmssp , which is the default. For security reasons, do not use the insecure ntlm security mode. username= user_name Sets the user name used to authenticate to the SMB server. Alternatively, specify a credentials file using the credentials option. vers= SMB_protocol_version Sets the SMB protocol version used for the communication with the server. For a complete list, see the OPTIONS section in the mount.cifs(8) man page on your system. | [
"mount -t cifs -o vers=1.0,username= <user_name> // <server_name> / <share_name> /mnt/",
"mount // <server_name> / <share_name> on /mnt type cifs (...,unix,...)",
"mount -t cifs -o username= <user_name> // <server_name> / <share_name> /mnt/ Password for <user_name> @// <server_name> / <share_name> : password",
"mount -t cifs -o username= DOMAIN \\Administrator,seal,vers=3.0 //server/example /mnt/ Password for DOMAIN \\Administrator@//server_name/share_name: password",
"ls -l /mnt/ total 4 drwxr-xr-x. 2 root root 8748 Dec 4 16:27 test.txt drwxr-xr-x. 17 root root 4096 Dec 4 07:43 Demo-Directory",
"// <server_name> / <share_name> /mnt cifs credentials= /root/smb.cred 0 0",
"mount /mnt/",
"username= user_name password= password domain= domain_name",
"chown user_name /root/smb.cred chmod 600 /root/smb.cred",
"//server_name/share_name /mnt cifs multiuser,sec=ntlmssp ,credentials= /root/smb.cred 0 0",
"mount /mnt/",
"mount //server_name/share_name on /mnt type cifs (sec=ntlmssp, multiuser ,...)",
"cifscreds add -u SMB_user_name server_name Password: password"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/mounting-an-smb-share-on-red-hat-enterprise-linux_managing-file-systems |
E.2. Setting Up Encrypted Communication between the Manager and an LDAP Server | E.2. Setting Up Encrypted Communication between the Manager and an LDAP Server To set up encrypted communication between the Red Hat Virtualization Manager and an LDAP server, obtain the root CA certificate of the LDAP server, copy the root CA certificate to the Manager, and create a PEM-encoded CA certificate. The keystore type can be any Java-supported type. The following procedure uses the Java KeyStore (JKS) format. Note For more information on creating a PEM-encoded CA certificate and importing certificates, see the X.509 CERTIFICATE TRUST STORE section of the README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap-< version >. Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . Procedure On the Red Hat Virtualization Manager, copy the root CA certificate of the LDAP server to the /tmp directory and import the root CA certificate using keytool to create a PEM-encoded CA certificate. The following command imports the root CA certificate at /tmp/myrootca.pem and creates a PEM-encoded CA certificate myrootca.jks under /etc/ovirt-engine/aaa/ . Note down the certificate's location and password. If you are using the interactive setup tool, this is all the information you need. If you are configuring the LDAP server manually, follow the rest of the procedure to update the configuration files. USD keytool -importcert -noprompt -trustcacerts -alias myrootca -file /tmp/myrootca.pem -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass password Update the /etc/ovirt-engine/aaa/profile1.properties file with the certificate information: Note USD{local:_basedir} is the directory where the LDAP property configuration file resides and points to the /etc/ovirt-engine/aaa directory. If you created the PEM-encoded CA certificate in a different directory, replace USD{local:_basedir} with the full path to the certificate. To use startTLS (recommended): # Create keystore, import certificate chain and uncomment pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password To use SSL: # Create keystore, import certificate chain and uncomment pool.default.serverset.single.port = 636 pool.default.ssl.enable = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password To continue configuring an external LDAP provider, see Configuring an External LDAP Provider . To continue configuring LDAP and Kerberos for Single Sign-on, see Configuring LDAP and Kerberos for Single Sign-on . | [
"keytool -importcert -noprompt -trustcacerts -alias myrootca -file /tmp/myrootca.pem -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass password",
"Create keystore, import certificate chain and uncomment pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password",
"Create keystore, import certificate chain and uncomment pool.default.serverset.single.port = 636 pool.default.ssl.enable = true pool.default.ssl.truststore.file = USD{local:_basedir}/ myrootca.jks pool.default.ssl.truststore.password = password"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/Setting_Up_SSL_or_TLS_Connections_between_the_Manager_and_an_LDAP_Server |
Gateways | Gateways Red Hat OpenShift Service Mesh 3.0.0tp1 Gateways and OpenShift Service Mesh Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/gateways/index |
Getting Started with Red Hat Trusted Application Pipeline | Getting Started with Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.4 Learn how to get started with Red Hat Trusted Application Pipeline. Red Hat Trusted Application Pipeline Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/getting_started_with_red_hat_trusted_application_pipeline/index |
Chapter 6. Deployments | Chapter 6. Deployments 6.1. Custom domains for applications Warning Starting with Red Hat OpenShift Service on AWS 4.14, the Custom Domain Operator is deprecated. To manage Ingress in Red Hat OpenShift Service on AWS 4.14, use the Ingress Operator. The functionality is unchanged for Red Hat OpenShift Service on AWS 4.13 and earlier versions. You can configure a custom domain for your applications. Custom domains are specific wildcard domains that can be used with Red Hat OpenShift Service on AWS applications. 6.1.1. Configuring custom domains for applications The top-level domains (TLDs) are owned by the customer that is operating the Red Hat OpenShift Service on AWS cluster. The Custom Domains Operator sets up a new ingress controller with a custom certificate as a second day operation. The public DNS record for this ingress controller can then be used by an external DNS to create a wildcard CNAME record for use with a custom domain. Note Custom API domains are not supported because Red Hat controls the API domain. However, customers can change their application domains. For private custom domains with a private IngressController , set .spec.scope to Internal in the CustomDomain CR. Prerequisites A user account with dedicated-admin privileges A unique domain or wildcard domain, such as *.apps.<company_name>.io A custom certificate or wildcard custom certificate, such as CN=*.apps.<company_name>.io Access to a cluster with the latest version of the oc CLI installed Important Do not use the reserved names default or apps* , such as apps or apps2 , in the metadata/name: section of the CustomDomain CR. Procedure Create a new TLS secret from a private key and a public certificate, where fullchain.pem and privkey.pem are your public or private wildcard certificates. Example USD oc create secret tls <name>-tls --cert=fullchain.pem --key=privkey.pem -n <my_project> Create a new CustomDomain custom resource (CR): Example <company_name>-custom-domain.yaml apiVersion: managed.openshift.io/v1alpha1 kind: CustomDomain metadata: name: <company_name> spec: domain: apps.<company_name>.io 1 scope: External loadBalancerType: Classic 2 certificate: name: <name>-tls 3 namespace: <my_project> routeSelector: 4 matchLabels: route: acme namespaceSelector: 5 matchLabels: type: sharded 1 The custom domain. 2 The type of load balancer for your custom domain. This type can be the default classic or NLB if you use a network load balancer. 3 The secret created in the step. 4 Optional: Filters the set of routes serviced by the CustomDomain ingress. If no value is provided, the default is no filtering. 5 Optional: Filters the set of namespaces serviced by the CustomDomain ingress. If no value is provided, the default is no filtering. Apply the CR: Example USD oc apply -f <company_name>-custom-domain.yaml Get the status of your newly created CR: USD oc get customdomains Example output NAME ENDPOINT DOMAIN STATUS <company_name> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com *.apps.<company_name>.io Ready Using the endpoint value, add a new wildcard CNAME recordset to your managed DNS provider, such as Route53. 
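If the domain is hosted in Route53, one way to create that recordset is with the AWS CLI. The following is a minimal sketch, not a step from this procedure; the hosted zone ID and TTL are illustrative placeholders, and the record values reuse the sample domain and endpoint from the output above: USD aws route53 change-resource-record-sets --hosted-zone-id <hosted_zone_id> --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"*.apps.<company_name>.io","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com"}]}}]}' The resulting mapping is shown in the following example.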
Example *.apps.<company_name>.io -> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com Create a new application and expose it: Example USD oc new-app --docker-image=docker.io/openshift/hello-openshift -n my-project USD oc create route <route_name> --service=hello-openshift hello-openshift-tls --hostname hello-openshift-tls-my-project.apps.<company_name>.io -n my-project USD oc get route -n my-project USD curl https://hello-openshift-tls-my-project.apps.<company_name>.io Hello OpenShift! Troubleshooting Error creating TLS secret Troubleshooting: CustomDomain in NotReady state 6.1.2. Renewing a certificate for custom domains You can renew certificates with the Custom Domains Operator (CDO) by using the oc CLI tool. Prerequisites You have the latest version oc CLI tool installed. Procedure Create new secret USD oc create secret tls <secret-new> --cert=fullchain.pem --key=privkey.pem -n <my_project> Patch CustomDomain CR USD oc patch customdomain <company_name> --type='merge' -p '{"spec":{"certificate":{"name":"<secret-new>"}}}' Delete old secret USD oc delete secret <secret-old> -n <my_project> Troubleshooting Error creating TLS secret 6.2. Understanding deployments The Deployment and DeploymentConfig API objects in Red Hat OpenShift Service on AWS provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template. Deployment objects involve one or more replica sets , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers , which preceded replica sets. One or more pods, which represent an instance of a particular version of an application. Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. Important As of Red Hat OpenShift Service on AWS 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 6.2.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 6.2.1.1. Replica sets A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. 
Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 6.2.1.2. Replication controllers Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. Note Use a DeploymentConfig to create a replication controller instead of creating replication controllers directly. If you require custom orchestration or do not require updates, use replica sets instead of replication controllers. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6.2.2. Deployments Kubernetes provides a first-class, native API object type in Red Hat OpenShift Service on AWS called Deployment . Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. 
For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 6.2.3. DeploymentConfig objects Important As of Red Hat OpenShift Service on AWS 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. Building on replication controllers, Red Hat OpenShift Service on AWS adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. However, Red Hat OpenShift Service on AWS deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a pod commonly referred to as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior at different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure. Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The Red Hat OpenShift Service on AWS DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the replication controller is retained to enable easy rollback if needed.
Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 6.2.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and Red Hat OpenShift Service on AWS-provided DeploymentConfig objects are supported in Red Hat OpenShift Service on AWS; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use. Important As of Red Hat OpenShift Service on AWS 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 6.2.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you can not delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 6.2.4.2. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects that use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rapid rollouts for Deployment objects. 
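As an illustration only, not a step from this documentation, you can observe this behavior by watching the replica sets that a Deployment owns while a rollout is in progress; the label selector here is a placeholder for your own application label: USD oc get replicasets -l app=hello-openshift --watch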
Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes. 6.2.4.3. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies. 6.3. Managing deployment processes 6.3.1. Managing DeploymentConfig objects Important As of Red Hat OpenShift Service on AWS 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. DeploymentConfig objects can be managed from the Red Hat OpenShift Service on AWS web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 6.3.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 6.3.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 6.3.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. 
The restarted replication controller has the same configuration it had when it failed. 6.3.1.4. Rolling back a deployment Rollbacks revert an application back to a revision and can be performed using the REST API, the CLI, or the web console. Procedure To rollback to the last successful deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy stays intact by the system and it is up to users to fix their configurations. 6.3.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time. Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar # ... 6.3.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application. You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 6.3.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. 
Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 6.3.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 6.3.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... 
type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. 6.3.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 6.3.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. This procedure shows the Red Hat OpenShift Service on AWS web console method. Procedure Create a new project. Navigate to Workloads Secrets . Create a secret that contains credentials for accessing a private image repository. Navigate to Workloads DeploymentConfigs . Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 6.3.1.11. Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc # ... spec: # ... securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 6.4. Using deployment strategies Deployment strategies are used to change or upgrade applications without downtime so that users barely notice a change. Because users generally access applications through a route handled by a router, deployment strategies can focus on DeploymentConfig object features or routing features. Strategies that focus on DeploymentConfig object features impact all routes that use the application. Strategies that use router features target individual routes. Most deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. 6.4.1. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. 
The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 6.4.2. Rolling strategy A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: "20%" 4 maxUnavailable: "10%" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the previous complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure. 6 pre and post are both lifecycle hooks. The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update). maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable .
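For Kubernetes Deployment objects, the equivalent tuning lives under spec.strategy.rollingUpdate rather than rollingParams. The following is a minimal sketch with illustrative values only; the name and label are placeholders that mirror the earlier hello-openshift example: apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 3 selector: matchLabels: app: hello-openshift strategy: type: RollingUpdate rollingUpdate: maxSurge: "20%" maxUnavailable: "10%" template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80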
Warning The default setting for maxUnavailable is 1 for all the machine config pools in Red Hat OpenShift Service on AWS. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 6.4.2.1. Canary deployments All rolling deployments in Red Hat OpenShift Service on AWS are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 6.4.2.2. Creating a rolling deployment Rolling deployments are the default type in Red Hat OpenShift Service on AWS. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest Note This image does not expose any ports. If you want to expose your applications over an external LoadBalancer service or enable access to the application over the public internet, create a service by using the oc expose dc/deployment-example --port=<port> command after completing this procedure. If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its version. 6.4.2.3. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. 
Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 6.4.2.4. Starting a rolling deployment using the Developer perspective You can upgrade an application by starting a rolling deployment. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure In the Topology view, click the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 6.1. Rolling update Additional resources Creating and deploying applications on Red Hat OpenShift Service on AWS using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 6.4.3. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which is not supported being shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 6.4.3.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 6.4.3.2. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. 
Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 6.2. Recreate update Additional resources Creating and deploying applications on Red Hat OpenShift Service on AWS using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 6.4.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, Red Hat OpenShift Service on AWS provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default Red Hat OpenShift Service on AWS deployer image is used instead: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... 
strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the Red Hat OpenShift Service on AWS API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 6.4.4.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 6.4.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 
2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 6.4.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 6.5. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. Alternatively, you can use an A/B versions strategy in which both versions are active at the same time. With this strategy, some users can use version A , and other users can use version B . You can use this strategy to experiment with user interface changes or other features in order to get user feedback. You can also use it to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled. 6.5.1. Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard. 
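For example, if a proxy shard and the main application run as two separate DeploymentConfig objects behind the same routing layer, their relative replica counts determine the approximate traffic split. The router-shard-a and router-shard-b names below are hypothetical, and the 9:1 replica ratio approximates a 90/10 distribution only when requests are balanced evenly across pods:
# hypothetical shard names; 9:1 replicas approximates a 90/10 split
USD oc scale dc/router-shard-a --replicas=9
USD oc scale dc/router-shard-b --replicas=1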
Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the Red Hat OpenShift Service on AWS router with proportional balancing capabilities. 6.5.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 6.5.3. Graceful termination Red Hat OpenShift Service on AWS and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, Red Hat OpenShift Service on AWS sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminates individual connections at the next opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 6.5.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route. Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version. 6.5.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live.
If necessary, you can roll back to the older (blue) version by switching the service back to the previous version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 6.5.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. Red Hat OpenShift Service on AWS supports N-1 compatibility through the web console as well as the CLI.
The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Note When using alternateBackends , also use the roundrobin load balancing strategy to ensure requests are distributed as expected to the services based on weight. roundrobin can be set for a route by using a route annotation. See the Additional resources section for more information about route annotations. Setting the oc set route-backend to 0 means the service does not participate in load balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin # ... spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 # ... 6.5.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Options menu to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 6.5.5.1.2. Managing weights of an new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that depicts relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 6.5.5.1.3. Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] 
[options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To override the default values for the load balancing algorithm, adjust the annotation on the route by setting the algorithm to roundrobin . For a route on Red Hat OpenShift Service on AWS, the default load balancing algorithm is set to random or source values. To set the algorithm to roundrobin , run the command: USD oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin For Transport Layer Security (TLS) passthrough routes, the default value is source . For all other routes, the default is random . To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 6.5.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route. 
However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red). Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b 6.5.6. Additional resources Route-specific annotations . | [
"oc create secret tls <name>-tls --cert=fullchain.pem --key=privkey.pem -n <my_project>",
"apiVersion: managed.openshift.io/v1alpha1 kind: CustomDomain metadata: name: <company_name> spec: domain: apps.<company_name>.io 1 scope: External loadBalancerType: Classic 2 certificate: name: <name>-tls 3 namespace: <my_project> routeSelector: 4 matchLabels: route: acme namespaceSelector: 5 matchLabels: type: sharded",
"oc apply -f <company_name>-custom-domain.yaml",
"oc get customdomains",
"NAME ENDPOINT DOMAIN STATUS <company_name> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com *.apps.<company_name>.io Ready",
"*.apps.<company_name>.io -> xxrywp.<company_name>.cluster-01.opln.s1.openshiftapps.com",
"oc new-app --docker-image=docker.io/openshift/hello-openshift -n my-project",
"oc create route <route_name> --service=hello-openshift hello-openshift-tls --hostname hello-openshift-tls-my-project.apps.<company_name>.io -n my-project",
"oc get route -n my-project",
"curl https://hello-openshift-tls-my-project.apps.<company_name>.io Hello OpenShift!",
"oc create secret tls <secret-new> --cert=fullchain.pem --key=privkey.pem -n <my_project>",
"oc patch customdomain <company_name> --type='merge' -p '{\"spec\":{\"certificate\":{\"name\":\"<secret-new>\"}}}'",
"oc delete secret <secret-old> -n <my_project>",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/building_applications/deployments |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_smart_card_authentication/proc_providing-feedback-on-red-hat-documentation_managing-smart-card-authentication |
9.9. Set the Root Password | 9.9. Set the Root Password Setting up a root account and password is one of the most important steps during your installation. The root account is used to install packages, upgrade RPMs, and perform most system maintenance. Logging in as root gives you complete control over your system. Note The root user (also known as the superuser) has complete access to the entire system; for this reason, logging in as the root user is best done only to perform system maintenance or administration. Figure 9.32. Root Password Use the root account only for system administration. Create a non-root account for your general use and use the su command to change to root only when you need to perform tasks that require superuser authorization. These basic rules minimize the chances of a typo or an incorrect command doing damage to your system. Note To become root, type su - at the shell prompt in a terminal window and then press Enter . Then, enter the root password and press Enter . The installation program prompts you to set a root password [2] for your system. . You cannot proceed to the stage of the installation process without entering a root password. The root password must be at least six characters long; the password you type is not echoed to the screen. You must enter the password twice; if the two passwords do not match, the installation program asks you to enter them again. You should make the root password something you can remember, but not something that is easy for someone else to guess. Your name, your phone number, qwerty , password, root , 123456 , and anteater are all examples of bad passwords. Good passwords mix numerals with upper and lower case letters and do not contain dictionary words: Aard387vark or 420BMttNT , for example. Remember that the password is case-sensitive. If you write down your password, keep it in a secure place. However, it is recommended that you do not write down this or any password you create. Warning Do not use one of the example passwords offered in this manual. Using one of these passwords could be considered a security risk. To change your root password after you have completed the installation, run the passwd command as root . If you forget the root password, see Resolving Problems in System Recovery Modes in the Red Hat Enterprise Linux 6 Deployment Guide for instructions on how to set a new one. [2] A root password is the administrative password for your Red Hat Enterprise Linux system. You should only log in as root when needed for system maintenance. The root account does not operate within the restrictions placed on normal user accounts, so changes made as root can have implications for your entire system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-account_configuration-x86 |
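The following terminal sketch illustrates the two commands mentioned above; the prompts and messages shown are illustrative and may differ slightly on your system:
su -
Password:
passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.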
Appendix B. Importing Kickstart Repositories | Appendix B. Importing Kickstart Repositories Kickstart repositories are not provided by the Content ISO image. To use Kickstart repositories in your disconnected Satellite, you must download a binary DVD ISO file for the version of Red Hat Enterprise Linux that you want to use and copy the Kickstart files to Satellite. To import Kickstart repositories for Red Hat Enterprise Linux 7, complete Section B.1, "Importing Kickstart Repositories for Red Hat Enterprise Linux 7" . To import Kickstart repositories for Red Hat Enterprise Linux 8, complete Section B.2, "Importing Kickstart Repositories for Red Hat Enterprise Linux 8" . B.1. Importing Kickstart Repositories for Red Hat Enterprise Linux 7 To import Kickstart repositories for Red Hat Enterprise Linux 7, complete the following steps on Satellite. Procedure Navigate to the Red Hat Customer Portal at access.redhat.com and log in. In the upper left of the window, click Downloads . To the right of Red Hat Enterprise Linux 7 , click Versions 7 and below . From the Version list, select the required version of the Red Hat Enterprise Linux 7, for example 7.7. In the Download Red Hat Enterprise Linux window, locate the binary DVD version of the ISO image, for example, Red Hat Enterprise Linux 7.7 Binary DVD , and click Download Now . When the download completes, copy the ISO image to Satellite Server. On Satellite Server, create a mount point and temporarily mount the ISO image at that location: Create Kickstart directories: Copy the kickstart files from the ISO image: Add the following entries to the listing files: To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/listing file, append the version number with a new line. For example, for the RHEL 7.7 ISO, append 7.7 . To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/listing file, append the architecture with a new line. For example, x86_64 . To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/listing file, append kickstart with a new line. Copy the .treeinfo files from the ISO image: If you do not plan to use the mounted binary DVD ISO image, unmount and remove the directory: In the Satellite web UI, enable the Kickstart repositories. B.2. Importing Kickstart Repositories for Red Hat Enterprise Linux 8 To import Kickstart repositories for Red Hat Enterprise Linux 8, complete the following steps on Satellite. Procedure Navigate to the Red Hat Customer Portal at access.redhat.com and log in. In the upper left of the window, click Downloads . Click Red Hat Enterprise Linux 8 . In the Download Red Hat Enterprise Linux window, locate the binary DVD version of the ISO image, for example, Red Hat Enterprise Linux 8.1 Binary DVD , and click Download Now . When the download completes, copy the ISO image to Satellite Server. On Satellite Server, create a mount point and temporarily mount the ISO image at that location: Create directories for Red Hat Enterprise Linux 8 AppStream and BaseOS Kickstart repositories: Copy the kickstart files from the ISO image: Note that for BaseOS, you must also copy the contents of the /mnt/ iso /images/ directory. Add the following entries to the listing files: To the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/listing file, append kickstart with a new line. 
To the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/listing file, append kickstart with a new line: To the /var/www/html/pub/sat-import/content/dist/rhel8/listing file, append the version number with a new line. For example, for the RHEL 8.1 binary ISO, append 8.1 . Copy the .treeinfo files from the ISO image: Open the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo file for editing. In the [general] section, make the following changes: Change packagedir = AppStream/Packages to packagedir = Packages Change repository = AppStream to repository = . Change variant = AppStream to variant = BaseOS Change variants = AppStream,BaseOS to variants = BaseOS In the [tree] section, change variants = AppStream,BaseOS to variants = BaseOS . In the [variant-BaseOS] section, make the following changes: Change packages = BaseOS/Packages to packages = Packages Change repository = BaseOS to repository = . Delete the [media] and [variant-AppStream] sections. Save and close the file. Verify that the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo file has the following format: Open the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo file for editing. In the [general] section, make the following changes: Change packagedir = AppStream/Packages to packagedir = Packages Change repository = AppStream to repository = . Change variants = AppStream,BaseOS to variants = AppStream In the [tree] section, change variants = AppStream,BaseOS to variants = AppStream In the [variant-AppStream] section, make the following changes: Change packages = AppStream/Packages to packages = Packages Change repository = AppStream to repository = . Delete the following sections from the file: [checksums] , [images-x86_64] , [images-xen] , [media] , [stage2] , [variant-BaseOS] . Save and close the file. Verify that the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo file has the following format: If you do not plan to use the mounted binary DVD ISO image, unmount and remove the directory: In the Satellite web UI, enable the Kickstart repositories. | [
"mkdir /mnt/ iso mount -o loop rhel-binary-dvd.iso /mnt/ iso",
"mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/",
"cp -a /mnt/ iso /* /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/",
"cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/treeinfo",
"umount /mnt/ iso rmdir /mnt/ iso",
"mkdir /mnt/ iso mount -o loop rhel-binary-dvd.iso /mnt/ iso",
"mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart",
"cp -a /mnt/ iso /AppStream/* /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart cp -a /mnt/ iso /BaseOS/* /mnt/ iso /images/ /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart",
"cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo",
"[checksums] images/efiboot.img = sha256:9ad9beee4c906cd05d227a1be7a499c8d2f20b3891c79831325844c845262bb6 images/install.img = sha256:e246bf4aedfff3bb54ae9012f959597cdab7387aadb3a504f841bdc2c35fe75e images/pxeboot/initrd.img = sha256:a66e3c158f02840b19c372136a522177a2ab4bd91cb7269fb5bfdaaf7452efef images/pxeboot/vmlinuz = sha256:789028335b64ddad343f61f2abfdc9819ed8e9dfad4df43a2694c0a0ba780d16 [general] ; WARNING.0 = This section provides compatibility with pre-productmd treeinfos. ; WARNING.1 = Read productmd documentation for details about new format. arch = x86_64 family = Red Hat Enterprise Linux name = Red Hat Enterprise Linux 8.1.0 packagedir = Packages platforms = x86_64,xen repository = . timestamp = 1571146127 variant = BaseOS variants = BaseOS version = 8.1.0 [header] type = productmd.treeinfo version = 1.2 [images-x86_64] efiboot.img = images/efiboot.img initrd = images/pxeboot/initrd.img kernel = images/pxeboot/vmlinuz [images-xen] initrd = images/pxeboot/initrd.img kernel = images/pxeboot/vmlinuz [release] name = Red Hat Enterprise Linux short = RHEL version = 8.1.0 [stage2] mainimage = images/install.img [tree] arch = x86_64 build_timestamp = 1571146127 platforms = x86_64,xen variants = BaseOS [variant-BaseOS] id = BaseOS name = BaseOS packages = Packages repository = . type = variant uid = BaseOS",
"[general] ; WARNING.0 = This section provides compatibility with pre-productmd treeinfos. ; WARNING.1 = Read productmd documentation for details about new format. arch = x86_64 family = Red Hat Enterprise Linux name = Red Hat Enterprise Linux 8.1.0 packagedir = Packages platforms = x86_64,xen repository = . timestamp = 1571146127 variant = AppStream variants = AppStream version = 8.1.0 [header] type = productmd.treeinfo version = 1.2 [release] name = Red Hat Enterprise Linux short = RHEL version = 8.1.0 [tree] arch = x86_64 build_timestamp = 1571146127 platforms = x86_64,xen variants = AppStream [variant-AppStream] id = AppStream name = AppStream packages = Packages repository = . type = variant uid = AppStream",
"umount /mnt/ iso rmdir /mnt/ iso"
]
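One possible way to perform the "append ... with a new line" steps in the Red Hat Enterprise Linux 7 procedure above is with echo and shell redirection; the listing file paths are taken from the procedure, and the 7.7 version string is only an example for that release:
# appends the version, architecture, and kickstart entries described above
echo "7.7" >> /var/www/html/pub/sat-import/content/dist/rhel/server/7/listing
echo "x86_64" >> /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/listing
echo "kickstart" >> /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/listing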
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/Importing_Kickstart_Repositories_content-management |
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ Python examples require a running message broker with a queue named examples . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named examples . USD <broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2021-05-07 10:16:25 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
]
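As an optional verification step, you may be able to confirm that the examples queue exists by asking the broker for queue statistics with the artemis queue stat command; this sketch assumes the broker instance allows anonymous access as configured earlier, and additional options such as --url or credentials can be required in other setups:
# may require --url or credentials depending on your broker configuration
USD <broker-instance-dir>/bin/artemis queue stat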
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/using_the_broker_with_the_examples |
16.17.3. Create Software RAID | 16.17.3. Create Software RAID Redundant arrays of independent disks (RAIDs) are constructed from multiple storage devices that are arranged to provide increased performance and - in some configurations - greater fault tolerance. Refer to the Red Hat Enterprise Linux Storage Administration Guide for a description of different kinds of RAIDs. To make a RAID device, you must first create software RAID partitions. Once you have created two or more software RAID partitions, select RAID to join the software RAID partitions into a RAID device. RAID Partition Choose this option to configure a partition for software RAID. This option is the only choice available if your disk contains no software RAID partitions. This is the same dialog that appears when you add a standard partition - refer to Section 16.17.2, "Adding Partitions" for a description of the available options. Note, however, that File System Type must be set to software RAID Figure 16.41. Create a software RAID partition RAID Device Choose this option to construct a RAID device from two or more existing software RAID partitions. This option is available if two or more software RAID partitions have been configured. Figure 16.42. Create a RAID device Select the file system type as for a standard partition. Anaconda automatically suggests a name for the RAID device, but you can manually select names from md0 to md15 . Click the checkboxes beside individual storage devices to include or remove them from this RAID. The RAID Level corresponds to a particular type of RAID. Choose from the following options: RAID 0 - distributes data across multiple storage devices. Level 0 RAIDs offer increased performance over standard partitions, and can be used to pool the storage of multiple devices into one large virtual device. Note that Level 0 RAIDS offer no redundancy and that the failure of one device in the array destroys the entire array. RAID 0 requires at least two RAID partitions. RAID 1 - mirrors the data on one storage device onto one or more other storage devices. Additional devices in the array provide increasing levels of redundancy. RAID 1 requires at least two RAID partitions. RAID 4 - distributes data across multiple storage devices, but uses one device in the array to store parity information that safeguards the array in case any device within the array fails. Because all parity information is stored on the one device, access to this device creates a bottleneck in the performance of the array. RAID 4 requires at least three RAID partitions. RAID 5 - distributes data and parity information across multiple storage devices. Level 5 RAIDs therefore offer the performance advantages of distributing data across multiple devices, but do not share the performance bottleneck of level 4 RAIDs because the parity information is also distributed through the array. RAID 5 requires at least three RAID partitions. RAID 6 - level 6 RAIDs are similar to level 5 RAIDs, but instead of storing only one set of parity data, they store two sets. RAID 6 requires at least four RAID partitions. RAID 10 - level 10 RAIDs are nested RAIDs or hybrid RAIDs . Level 10 RAIDs are constructed by distributing data over mirrored sets of storage devices. For example, a level 10 RAID constructed from four RAID partitions consists of two pairs of partitions in which one partition mirrors the other. Data is then distributed across both pairs of storage devices, as in a level 0 RAID. RAID 10 requires at least four RAID partitions. 
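As a worked example of how the levels compare, assume four software RAID partitions of 1 GB each (sizes chosen only to keep the arithmetic simple): RAID 0 provides roughly 4 GB of usable space with no redundancy; RAID 1 provides 1 GB, because the remaining partitions hold mirror copies; RAID 4 and RAID 5 each provide roughly 3 GB, because the equivalent of one partition holds parity; RAID 6 provides roughly 2 GB, because the equivalent of two partitions holds parity; and RAID 10 provides roughly 2 GB, striped across two mirrored pairs.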
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/create_software_raid-ppc |
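The installer workflow above has a rough command-line counterpart that can make the terms clearer: a "software RAID partition" is an ordinary partition flagged for RAID use, and a "RAID device" is what mdadm assembles from two or more of them. The following sketch is not part of the installation procedure described in this section; it assumes an already-installed system with two spare partitions, and the device names, RAID level, and mount point (/dev/sda1, /dev/sdb1, /dev/md0, /mnt) are illustrative placeholders.

```
# Mark both member partitions as Linux software RAID members.
# /dev/sda1 and /dev/sdb1 are placeholder partitions, not values from this guide.
parted /dev/sda set 1 raid on
parted /dev/sdb set 1 raid on

# Join the two software RAID partitions into a single RAID 1 (mirrored) device,
# the command-line analogue of choosing "RAID Device" in the installer.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the array synchronize and inspect its layout.
cat /proc/mdstat
mdadm --detail /dev/md0

# Create a file system on the new device and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```

A RAID 4, 5, or 6 device would need at least three or four member partitions, exactly as the level descriptions above state; only the --level and --raid-devices values change.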
4. Recommended References | 4. Recommended References For additional references about related topics, refer to the following table:
Table 1. Recommended References
Topic: Shared Data Clustering and File Systems
Reference: Shared Data Clusters by Dilip M. Ranade. Wiley, 2002.
Comment: Provides detailed technical information on cluster file system and cluster volume-manager design.
Topic: Storage Area Networks (SANs)
Reference: Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs, Second Edition by Tom Clark. Addison-Wesley, 2003.
Comment: Provides a concise summary of Fibre Channel and IP SAN technology.
Reference: Building SANs with Brocade Fabric Switches by C. Beauchamp, J. Judd, and B. Keo. Syngress, 2001.
Comment: Best practices for building Fibre Channel SANs based on the Brocade family of switches, including core-edge topology for large SAN fabrics.
Reference: Building Storage Networks, Second Edition by Marc Farley. Osborne/McGraw-Hill, 2001.
Comment: Provides a comprehensive overview reference on storage networking technologies.
Topic: Applications and High Availability
Reference: Blueprints for High Availability: Designing Resilient Distributed Systems by E. Marcus and H. Stern. Wiley, 2000.
Comment: Provides a summary of best practices in high availability. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-intro-references-gfs |
Administering Red Hat Satellite | Administering Red Hat Satellite Red Hat Satellite 6.16. Administer users and permissions, manage organizations and locations, back up and restore Satellite, maintain Satellite, and more. Red Hat Satellite Documentation Team, [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/index |
Power Monitoring | Power Monitoring OpenShift Container Platform 4.16. Configuring and using power monitoring for OpenShift Container Platform. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/power_monitoring/index |
Appendix A. About Nodeshift | Appendix A. About Nodeshift Nodeshift is a module for running OpenShift deployments with Node.js projects. Important: Nodeshift assumes that you have the oc CLI client installed and that you are logged in to your OpenShift cluster. Nodeshift also uses the current project that the oc CLI client is using. Nodeshift uses resource files in the .nodeshift folder located at the root of the project to handle creating OpenShift Routes, Services, and DeploymentConfigs. More details on Nodeshift are available on the Nodeshift project page. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/node.js_runtime_guide/about-nodeshift |
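The appendix above lists what Nodeshift assumes but not how a deployment is typically started. The sketch below is not taken from this guide; it shows one plausible invocation of the Nodeshift CLI, with the login URL, token, and project name as placeholders, so verify the exact flags against the Nodeshift project page.

```
# Log in and select the project (namespace) to deploy into; Nodeshift reuses
# whatever project the oc client currently points at. URL, token, and project
# name below are placeholders.
oc login https://api.example.com:6443 --token=<token>
oc project my-example-project

# From the root of the Node.js project (next to package.json and .nodeshift/),
# build the application image and create the DeploymentConfig, Service, and Route.
npx nodeshift --expose

# Remove the objects that Nodeshift created when you are finished.
npx nodeshift undeploy
```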
Appendix B. Updating the deployment configuration of an example application | Appendix B. Updating the deployment configuration of an example application
The deployment configuration for an example application contains information related to deploying and running the application in OpenShift, such as route information or readiness probe location. The deployment configuration of an example application is stored in a set of YAML files. For examples that use the OpenShift Maven plugin, the YAML files are located in the src/main/jkube/ directory. For examples that use Nodeshift, the YAML files are located in the .nodeshift directory.
Important: The deployment configuration files used by the OpenShift Maven plugin and Nodeshift do not have to be full OpenShift resource definitions. Both the OpenShift Maven plugin and Nodeshift can take the deployment configuration files and add some missing information to create a full OpenShift resource definition. The resource definitions generated by the OpenShift Maven plugin are available in the target/classes/META-INF/jkube/ directory. The resource definitions generated by Nodeshift are available in the tmp/nodeshift/resource/ directory.
Prerequisites
An existing example project.
The oc CLI client installed.
Procedure
Edit an existing YAML file or create an additional YAML file with your configuration update. For example, if your example already has a YAML file with a readinessProbe configured, you could change the path value to a different available path to check for readiness:
spec:
  template:
    spec:
      containers:
        readinessProbe:
          httpGet:
            path: /path/to/probe
            port: 8080
            scheme: HTTP
...
If a readinessProbe is not configured in an existing YAML file, you can also create a new YAML file in the same directory with the readinessProbe configuration.
Deploy the updated version of your example using Maven or npm.
Verify that your configuration updates show in the deployed version of your example.
$ oc export all --as-template='my-template'
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: my-template
objects:
- apiVersion: template.openshift.io/v1
  kind: DeploymentConfig
  ...
  spec:
    ...
    template:
      ...
      spec:
        containers:
          ...
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /path/to/different/probe
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 1
          ...
Additional resources
If you updated the configuration of your application directly using the web-based console or the oc CLI client, export and add these changes to your YAML file. Use the oc export all command to show the configuration of your deployed application. | [
"spec: template: spec: containers: readinessProbe: httpGet: path: /path/to/probe port: 8080 scheme: HTTP",
"oc export all --as-template='my-template' apiVersion: template.openshift.io/v1 kind: Template metadata: creationTimestamp: null name: my-template objects: - apiVersion: template.openshift.io/v1 kind: DeploymentConfig spec: template: spec: containers: livenessProbe: failureThreshold: 3 httpGet: path: /path/to/different/probe port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 1"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_runtime_guide/updating-the-deployment-configuration-of-an-example-application_vertx |
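The procedure above says to deploy the updated example "using Maven or npm" and to verify the result, without listing the commands. The following sketch is one plausible end-to-end flow rather than text from this appendix: the oc:deploy goal assumes the Eclipse JKube OpenShift Maven plugin used by these examples, and the profile name, application name, and probe path are placeholders to adapt.

```
# Redeploy after editing the fragment under src/main/jkube/ (Maven-based examples) ...
mvn clean oc:deploy -Popenshift    # profile name is an assumption; check your pom.xml

# ... or under .nodeshift/ (Node.js-based examples).
npx nodeshift --expose

# Check that the regenerated DeploymentConfig picked up the new probe path.
# "my-example-app" is a placeholder application name.
oc get dc my-example-app -o yaml | grep -A 5 readinessProbe

# Optionally wait for the rollout to finish with the new probe in place.
oc rollout status dc/my-example-app
```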