As in Chapters 1 and 2, we'll simulate the flooding process in reverse, processing events for adjacent platforms becoming reachable while maintaining a disjoint-set data structure of mutually-reachable groups of platforms. The big difference is that sample collection events can no longer also be processed along the way; instead, each of the \(K\) updated \(S\) values will need to be processed separately. Each update can be thought of as a removal of an existing \(S\) value followed by an insertion of a new \(S\) value (at which point, the initial \(RC\) \(S\) values might as well be inserted in the same way).

While simulating the flooding process, we'll construct a corresponding binary tree (sketched in code below). Each initial group in the disjoint-set data structure will correspond to a leaf node, and whenever two groups get merged, their corresponding tree nodes will become the children of the resulting group's new tree node, with the tree's root being the node corresponding to the single final group consisting of all \(RC\) platforms. Each tree node will be relevant to a certain interval of heights (dictated by the heights of the events which caused the node to be created and to become another node's child), which will be stored as well.

Now, for a given rock sample (to be collected at a certain platform while the water level is at a certain height), we can think of the tree node which "contains" that sample as the highest ancestor of that platform's corresponding leaf node whose relevant height interval includes that height. As we go, we'll maintain information about which tree nodes currently contain collectable samples (of which there are at most \(RC\) at any point in time). In this representation, each robot is capable of collecting all samples contained in nodes along a single path from the root to a leaf node. This makes it tractable to determine the minimum number of robots \(y\) required to collect all current samples, and to update this quantity as samples get inserted or removed.

When a sample is about to get inserted at node \(i\), it can either cause \(y\) to increase by \(1\) or leave it unchanged. If there are any other samples in node \(i\)'s subtree, then \(y\) must remain unchanged, as a robot handling such a sample could also handle the new sample along the way. Otherwise, let \(a\) be node \(i\)'s closest ancestor containing any samples (if any). If there's no such ancestor \(a\), then \(y\) must increase by \(1\), as there are no other samples which an existing robot could be handling before or after the new sample. Otherwise, \(y\) must increase by \(1\) if and only if there are other samples amongst node \(a\)'s descendants: if there are, then the robot handling node \(a\)'s samples must already be tied up with handling another existing sample, while if there aren't, then that robot can come handle the new sample. When a sample gets removed, essentially identical logic can be applied to determine whether \(y\) should decrease by \(1\) or remain unchanged.
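To make the tree construction concrete, here's a minimal sketch assuming one plausible representation (singleton starting groups, integer event heights, and names like `birth`/`death` are illustrative assumptions rather than details of the official solution):

```cpp
// Sketch: each flood event merges two DSU groups under a fresh internal
// tree node; the event's height closes the children's relevant intervals.
#include <numeric>
#include <vector>
using namespace std;

struct MergeTree {
  vector<int> dsu;    // DSU parent (with path compression)
  vector<int> node;   // tree node currently representing each group
  vector<int> par;    // tree parent of each node (-1 while it's a root)
  vector<int> birth;  // event height at which each node was created
  vector<int> death;  // event height at which it became another node's child

  MergeTree(int leaves, int initialHeight)
      : dsu(leaves), node(leaves), par(leaves, -1),
        birth(leaves, initialHeight), death(leaves, -1) {
    iota(dsu.begin(), dsu.end(), 0);    // every platform is its own group...
    iota(node.begin(), node.end(), 0);  // ...and its own leaf node
  }

  int find(int x) { return dsu[x] == x ? x : dsu[x] = find(dsu[x]); }

  // Event: platforms u and v become mutually reachable at water level h.
  void merge(int u, int v, int h) {
    int a = find(u), b = find(v);
    if (a == b) return;       // already mutually reachable
    int w = (int)par.size();  // fresh internal node for the merged group
    par.push_back(-1);
    birth.push_back(h);
    death.push_back(-1);
    par[node[a]] = par[node[b]] = w;
    death[node[a]] = death[node[b]] = h;  // children's intervals end here
    dsu[b] = a;
    node[a] = w;
  }
};
```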
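The insertion rule can likewise be sketched as follows, assuming hypothetical helper queries backed by the data structures described in the list further below (removal mirrors this, with \(y\) decreasing exactly where it would have increased):

```cpp
// Sketch of the y-update rule when a sample is inserted at node i.
#include <functional>

int deltaOnInsert(
    int i,
    // Is any sample currently present at i or within i's subtree?
    const std::function<bool(int)>& subtreeHasSample,
    // Closest strict ancestor of i holding a sample, or -1 if none.
    const std::function<int(int)>& nearestSampledAncestor,
    // Is any sample present among node a's descendants, excluding a itself?
    const std::function<bool(int)>& descendantsHaveSample) {
  if (subtreeHasSample(i)) return 0;  // that sample's robot passes through i
  int a = nearestSampledAncestor(i);
  if (a == -1) return 1;  // no existing robot's path can be rerouted via i
  // If a has sampled descendants elsewhere, the robot covering a's samples
  // is already committed below a; otherwise it can extend its path to i.
  return descendantsHaveSample(a) ? 1 : 0;
}
```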
What remains is implementing all of the pieces described above efficiently, which can be accomplished with a variety of relatively standard tree algorithms (each sketched in code at the end of this section):

- To find the tree node which contains a given sample, we can start by precomputing a list of the 1st, 2nd, 4th, 8th, etc. ancestors of each node in \(O(RC \log(RC))\) time. Starting at the relevant platform's leaf node, we can then perform \(O(\log(RC))\) jumps up these lists (based on nodes' relevant height intervals) to arrive at the appropriate node.
- To determine whether or not a subtree contains any samples, we can start by traversing the tree in pre-order and precomputing each node's subtree's interval of pre-order indices in \(O(RC)\) time. Assuming we maintain a multiset of pre-order indices of nodes containing samples, we can then query whether this multiset contains any elements in the appropriate intervals. Each multiset insertion, deletion, and query takes \(O(\log(RC))\) time.
- To find a node's closest ancestor containing any samples, we can start by performing a [heavy-light decomposition](https://en.wikipedia.org/wiki/Heavy_path_decomposition) of the tree in \(O(RC)\) time. Assuming we similarly maintain a multiset of indices of nodes containing samples on each heavy path (also possible to implement as a single multiset reused across all heavy paths), we can then traverse individual nodes and heavy paths upwards from a given node while querying their multisets for the earliest relevant element. Each multiset insertion, deletion, and query takes \(O(\log(RC))\) time, and \(O(\log(RC))\) such queries might be performed per lookup. A slightly faster but more complicated approach is to use splay trees with preferred paths in the same style as link-cut trees instead of HLD, which allows us to save a \(\log(RC)\) factor.

The time complexity of this algorithm comes out to \(O((RC+K) \log^2(RC))\), or just \(O((RC+K) \log(RC))\) if implemented with splay trees. [See David Harmeyer (SecondThread)'s solution video here.](https://youtu.be/w6Xvy0c876o?t=524)
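For the first bullet above, a sketch of the binary-lifting lookup, reusing the `par`/`birth` arrays from the merge-tree sketch. It assumes heights decrease over the reverse simulation, so that each node's relevant interval runs from its parent's creation height (exclusive) up to its own (inclusive); flip the comparison for the opposite convention:

```cpp
// Sketch: up[j][v] is v's 2^j-th ancestor (or -1). Starting from a leaf,
// greedily jump to any ancestor created at a height >= h; since birth
// heights strictly decrease going up, the node we end on is the one whose
// relevant height interval contains h.
#include <vector>
using namespace std;

struct AncestorTable {
  int LOG = 1;
  vector<vector<int>> up;
  const vector<int>& birth;

  AncestorTable(const vector<int>& par, const vector<int>& birth)
      : birth(birth) {
    int n = (int)par.size();
    while ((1 << LOG) < n) ++LOG;
    up.assign(LOG, vector<int>(n, -1));
    up[0] = par;
    for (int j = 1; j < LOG; ++j)
      for (int v = 0; v < n; ++v)
        if (up[j - 1][v] != -1) up[j][v] = up[j - 1][up[j - 1][v]];
  }

  // Node containing a sample collected at water level h at this platform.
  int containingNode(int leaf, int h) const {
    int v = leaf;
    for (int j = LOG - 1; j >= 0; --j) {
      int u = up[j][v];
      if (u != -1 && birth[u] >= h) v = u;  // u is still relevant at level h
    }
    return v;
  }
};
```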
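For the second bullet, the subtree check reduces to a range-emptiness query on a multiset of pre-order indices. In this sketch, `tin[v]` and `tout[v]` are assumed to come from a standard pre-order DFS, giving the half-open interval of pre-order indices covering `v`'s subtree:

```cpp
// Sketch: samples are tracked by the pre-order index of their node, so
// "does v's subtree hold a sample?" becomes "does the multiset contain an
// element in [tin[v], tout[v])?".
#include <set>
#include <vector>
using namespace std;

struct SubtreeSamples {
  const vector<int>& tin;   // pre-order index at which each node is entered
  const vector<int>& tout;  // one past the last pre-order index in its subtree
  multiset<int> ids;        // pre-order indices of nodes holding samples

  SubtreeSamples(const vector<int>& tin, const vector<int>& tout)
      : tin(tin), tout(tout) {}

  void insert(int v) { ids.insert(tin[v]); }
  void erase(int v) { ids.erase(ids.find(tin[v])); }

  bool subtreeHasSample(int v) const {
    auto it = ids.lower_bound(tin[v]);
    return it != ids.end() && *it < tout[v];
  }

  // Variant excluding v itself (as needed for the y-update rule): v's
  // descendants occupy [tin[v] + 1, tout[v]), so shift the lower bound.
  bool descendantsHaveSample(int v) const {
    auto it = ids.lower_bound(tin[v] + 1);
    return it != ids.end() && *it < tout[v];
  }
};
```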
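And for the third bullet, a sketch of the upward walk over heavy paths. It assumes standard HLD arrays (`head[v]` is the top node of `v`'s heavy path and `depth[v]` its distance from the root), with one multiset of (depth, node) pairs per heavy path:

```cpp
// Sketch: to find the closest sampled strict ancestor of v, hop upward one
// heavy path at a time, asking each path's multiset for the deepest sampled
// node at or above the current position.
#include <climits>
#include <iterator>
#include <map>
#include <set>
#include <utility>
#include <vector>
using namespace std;

struct SampledAncestors {
  const vector<int>& par;    // tree parent of each node (-1 for the root)
  const vector<int>& head;   // top node of each node's heavy path
  const vector<int>& depth;  // distance from the root
  map<int, multiset<pair<int, int>>> onPath;  // head -> {(depth, node)}

  SampledAncestors(const vector<int>& par, const vector<int>& head,
                   const vector<int>& depth)
      : par(par), head(head), depth(depth) {}

  void insert(int v) { onPath[head[v]].insert({depth[v], v}); }
  void erase(int v) {
    auto& s = onPath[head[v]];
    s.erase(s.find({depth[v], v}));
  }

  // Closest strict ancestor of v holding a sample, or -1 if none.
  int nearestSampledAncestor(int v) const {
    v = par[v];  // strict: start the search from v's parent
    while (v != -1) {
      auto it = onPath.find(head[v]);
      if (it != onPath.end() && !it->second.empty()) {
        // Deepest sampled node on this heavy path at depth <= depth[v];
        // every such node is an ancestor of (or equal to) v.
        auto jt = it->second.upper_bound({depth[v], INT_MAX});
        if (jt != it->second.begin()) return prev(jt)->second;
      }
      v = par[head[v]];  // hop to the heavy path above this one
    }
    return -1;
  }
};
```

Each lookup touches \(O(\log(RC))\) heavy paths and performs one \(O(\log(RC))\) multiset query on each, matching the \(O(\log^2(RC))\) factor above.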