Dataset columns: contest_id (string, length 1–4), index (string, 43 classes), title (string, length 2–63), statement (string, length 51–4.24k), tutorial (string, length 19–20.4k), tags (list, length 0–11), rating (int64, 800–3.5k), code (string, length 46–29.6k).
2114
E
Kirei Attacks the Estate
Once, Kirei stealthily infiltrated the trap-filled estate of the Ainzbern family but was discovered by Kiritugu's familiar. Assessing his strength, Kirei decided to retreat. The estate is represented as a tree with $n$ vertices, with the \textbf{root} at vertex $1$. Each vertex of the tree has a number $a_i$ recorded, which represents the danger of vertex $i$. Recall that a tree is a connected undirected graph without cycles. For a successful retreat, Kirei must compute the threat value for each vertex. The threat of a vertex is equal to the \textbf{maximum} alternating sum along the vertical path starting from that vertex. The alternating sum along the vertical path starting from vertex $i$ is defined as $a_i - a_{p_i} + a_{p_{p_i}} - \ldots$, where $p_i$ is the parent of vertex $i$ on the path to the root (to vertex $1$). For example, in the tree below, vertex $4$ has the following vertical paths: - $[4]$ with an alternating sum of $a_4 = 6$; - $[4, 3]$ with an alternating sum of $a_4 - a_3 = 6 - 2 = 4$; - $[4, 3, 2]$ with an alternating sum of $a_4 - a_3 + a_2 = 6 - 2 + 5 = 9$; - $[4, 3, 2, 1]$ with an alternating sum of $a_4 - a_3 + a_2 - a_1 = 6 - 2 + 5 - 4 = 5$. \begin{center} {\small The dangers of the vertices are indicated in red.} \end{center} Help Kirei compute the threat values for all vertices and escape the estate.
Let's consider each vertex separately. The sign with which its danger enters the alternating sum depends on the parity of its position on the path. Let $f(v)$ be the maximum alternating sum over vertical paths starting at $v$, and $g(v)$ the minimum. When computing $f(v)$, we either stop at $v$ or subtract the minimum value for the parent: $f(v) = \max(a_v, a_v - g(p_v))$; symmetrically, $g(v) = \min(a_v, a_v - f(p_v))$. It is important not to forget the base case at the root: $f(1) = g(1) = a_1$. Thus, the values of $f$ and $g$ are computed trivially by a depth-first traversal of the tree. The array of values $f$ is the answer to the problem.
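The recurrences above can be checked against the statement's 4-vertex example with a short sketch (function name and iterative DFS are mine, not from the reference solution):

```python
# Sketch of the f/g recurrences: f(v) = maximum alternating sum of a
# vertical path starting at v, g(v) = minimum. Example tree from the
# statement: edges 1-2, 2-3, 3-4 (0-indexed below), dangers a = [4, 5, 2, 6].
def threats(n, a, edges, root=0):
    gr = [[] for _ in range(n)]
    for u, v in edges:
        gr[u].append(v)
        gr[v].append(u)
    f = [0] * n  # maximum alternating sum starting at each vertex
    g = [0] * n  # minimum alternating sum starting at each vertex
    stack = [(root, -1)]
    while stack:
        v, p = stack.pop()
        if p == -1:
            f[v] = g[v] = a[v]  # base case at the root
        else:
            f[v] = max(a[v], a[v] - g[p])
            g[v] = min(a[v], a[v] - f[p])
        for u in gr[v]:
            if u != p:
                stack.append((u, v))
    return f

print(threats(4, [4, 5, 2, 6], [(0, 1), (1, 2), (2, 3)]))  # [4, 5, 2, 9]
```

Vertex $4$ gets threat $9$, matching the path $[4, 3, 2]$ from the statement.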
[ "dfs and similar", "dp", "greedy", "trees" ]
1,400
from math import inf
from sys import setrecursionlimit


def solve(v, p, mini, maxi):
    global res
    res[v] = max(arr[v], mini * -1 + arr[v])
    mini = min(arr[v], maxi * -1 + arr[v])
    for u in gr[v]:
        if u == p:
            continue
        solve(u, v, mini, res[v])


setrecursionlimit(400_000)
t = int(input())
for _ in range(t):
    n = int(input())
    arr = list(map(int, input().split()))
    gr = [[] for _ in range(n)]
    for j in range(n - 1):
        v, u = map(int, input().split())
        gr[v - 1].append(u - 1)
        gr[u - 1].append(v - 1)
    res = [0] * n
    solve(0, -1, 0, 0)
    print(*res)
2114
F
Small Operations
Given three integers $x$, $y$, and $k$. In one operation, you can perform one of two actions: - choose an integer $1 \le a \le k$ and assign $x = x \cdot a$; - choose an integer $1 \le a \le k$ and assign $x = \frac{x}{a}$, where the value of $\frac{x}{a}$ must be an integer. Find the minimum number of operations required to make the number $x$ equal to $y$, or determine that it is impossible.
It is not difficult to guess that we should first use operations of the second type to make $x$ equal to $\gcd(x, y)$, and only then use operations of the first type to make it equal to $y$. So we need to decompose each of the numbers $x/\gcd(x, y)$ and $y/\gcd(x, y)$ into the minimum number of factors not exceeding $k$. Let us learn to find this quantity for an arbitrary number $a$. We use dynamic programming: let $dp[i]$ be the minimum number of factors not exceeding $k$ into which the number $i$ can be decomposed. To compute it, we iterate over numbers $j$ such that $i$ is divisible by $j$ and $\frac{i}{j} \le k$, and update $dp[i] = \min(dp[i], dp[j] + 1)$. This approach works in $\mathcal{O}(a^2)$, which is too slow. To speed it up, note that states $i$ that are not divisors of $a$ are never needed. So we find all divisors of $a$; their count $d(a)$ can be estimated as $\mathcal{O}(\sqrt[3]{a})$ for highly composite numbers, so by using only the divisors as states we achieve an asymptotic complexity of $O(a^{\frac{2}{3}})$.
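A short standalone sketch of this divisor DP (the function name and the dict-based table are mine; the reference solution in this row does the same over a sorted divisor array):

```python
def min_factors(a, k):
    # Minimum number of factors, each <= k, whose product is a (-1 if impossible).
    if a == 1:
        return 0
    divs = set()
    i = 1
    while i * i <= a:
        if a % i == 0:
            divs.add(i)
            divs.add(a // i)
        i += 1
    divs = sorted(divs)
    INF = float("inf")
    dp = {1: 0}  # dp[d] = min number of factors <= k multiplying to d
    for d in divs[1:]:
        best = INF
        for j in divs:
            if j >= d:
                break
            # extend a decomposition of j by the single factor d // j
            if d % j == 0 and d // j <= k:
                best = min(best, dp.get(j, INF) + 1)
        dp[d] = best
    return -1 if dp[a] == INF else dp[a]
```

For example, $12 = 3 \cdot 4$ needs two factors when $k = 4$, but three factors ($2 \cdot 2 \cdot 3$) when $k = 3$.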
[ "binary search", "brute force", "dfs and similar", "dp", "math", "number theory", "sortings" ]
2,000
#include <bits/stdc++.h>
#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;

mt19937 rnd(time(nullptr));
const int inf = 1e9;
const int M = 1e9 + 7;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;

int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

int get_ans(int x, int k) {
    if (x == 1) return 0;
    vector<int> divs;
    for (int i = 1; i * i <= x; i++) {
        if (x % i == 0) {
            divs.push_back(i);
            divs.push_back(x / i);
        }
    }
    sort(all(divs));
    int n = divs.size();
    vector<int> dp(n, 100);
    dp[0] = 0;
    for (int i = 1; i < n; i++) {
        for (int j = i - 1; j >= 0; j--) {
            if (divs[i] / divs[j] > k) break;
            if (divs[i] % divs[j] == 0) dp[i] = min(dp[i], dp[j] + 1);
        }
    }
    return dp[n - 1] == 100 ? -1 : dp[n - 1];
}

void solve(int tc) {
    int x, y, k;
    cin >> x >> y >> k;
    int g = gcd(x, y);
    x /= g;
    y /= g;
    int ax = get_ans(x, k);
    int ay = get_ans(y, k);
    if (ax == -1 || ay == -1) cout << -1;
    else cout << ax + ay;
}

bool multi = true;

signed main() {
    int t = 1;
    if (multi) cin >> t;
    for (int i = 1; i <= t; ++i) {
        solve(i);
        cout << "\n";
    }
    return 0;
}
2114
G
Build an Array
Yesterday, Dima found an empty array and decided to add some integers to it. He can perform the following operation an unlimited number of times: - add any integer to the left or right end of the array. - then, as long as there is a pair of identical adjacent elements in the array, they will be replaced by their sum. It can be shown that there can be at most one such pair in the array at the same time. For example, if the array is $[3, 6, 4]$ and we add the number $3$ to the left, the array will first become $[3, 3, 6, 4]$, then the first two elements will be replaced by $6$, and the array will become $[6, 6, 4]$, and then — $[12, 4]$. After performing the operation \textbf{exactly} $k$ times, he thinks he has obtained an array $a$ of length $n$, but he does not remember which operations he applied. Determine if there exists a sequence of $k$ operations that could result in the given array $a$ from an empty array, or determine that it is impossible.
Let's start with a slow solution: we iterate over the element that will be added first, and then append the remaining elements to its left and right, using as many operations as possible. When appending a new number, we also look at the number that was appended before it on the same side. Denote the previous number by $b$ and the current number by $c$. We can build $c$ by repeatedly adding $\frac{c}{2^x}$, where $x$ is the maximum value for which $c$ is divisible by $2^x$. However, if we only add the values $\frac{c}{2^x}$, they may merge with $b$ when $\frac{c}{b}$ is a power of two. In that case, in order to add $c$ without merging into $b$, we must first add $2 \cdot b$, and only then as many copies of $\frac{c}{2^x}$ as possible. After calculating all these values, we know the maximum possible number of operations for constructing the array with a fixed starting element. Now, note that if two elements merged into one after an operation, they could instead have been added in one operation, so the number of operations can always be reduced by $1$. Thus, if the maximum is at least $k$, it is also possible to construct the array in exactly $k$ operations. In fact, the values for some starting elements may not be entirely accurate, since more operations can be performed when adding the first two elements, but these cases are counted correctly when we process the larger of the two neighboring elements as the starting one. Finally, to make the solution work in $\mathcal{O}(n \cdot \log{A})$ instead of $\mathcal{O}(n^2 \cdot \log{A})$, note that we compute the sums of additions for the same pairs $c = a_i$, $b = a_{i \pm 1}$ many times, so we can precompute these values on prefixes and suffixes and find the answer for each fixed starting element in $\mathcal{O}(1)$.
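The per-neighbor operation count described above can be sketched in Python (a direct port of the `max_op` helper from the row's reference solution; `b = 0` marks the very first element, which has no neighbor):

```python
def max_op(c, b):
    # Maximum number of operations usable to append element c next to the
    # previously-appended neighbor b (pass b = 0 for the starting element).
    part = c
    # halve while the next half would not merge into the neighbor b
    while part % 2 == 0 and part // 2 != b:
        part //= 2
    if part % 2 == 1:
        return c // part  # build c entirely out of copies of the odd part
    # otherwise part // 2 == b: first add 2*b in one operation, then fill
    # the rest with copies of the fully-halved odd value
    odd = part
    while odd % 2 == 0:
        odd //= 2
    return 1 + (c - part) // odd
```

For instance, `max_op(12, 0)` is 4 (add $3, 3, 3, 3$, merging up to $12$), while `max_op(12, 3)` is 3: since $12/3$ is a power of two, we must first add $6 = 2 \cdot 3$ and only then two copies of $3$.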
[ "brute force", "constructive algorithms", "dp", "greedy", "math", "number theory" ]
2,200
#include <bits/stdc++.h>
#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;

mt19937 rnd(time(nullptr));
const int inf = 1e9;
const int M = 1e9 + 7;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;

int max_op(int a, int b) {
    int min_part = a;
    while (min_part % 2 == 0 && min_part / 2 != b) min_part /= 2;
    if (min_part % 2 == 1) return a / min_part;
    int true_min = min_part;
    while (true_min % 2 == 0) true_min /= 2;
    return 1 + (a - min_part) / true_min;
}

void solve(int tc) {
    int n, k;
    cin >> n >> k;
    vector<int> a(n);
    for (int &e : a) cin >> e;
    vector<int> pre(n, 0);
    for (int j = 1; j < n; ++j) pre[j] = pre[j - 1] + max_op(a[j - 1], a[j]);
    vector<int> suf(n, 0);
    for (int j = n - 2; j >= 0; --j) suf[j] = suf[j + 1] + max_op(a[j + 1], a[j]);
    for (int i = 0; i < n; i++) {
        int res = max_op(a[i], 0) + pre[i] + suf[i];
        if (res >= k) {
            cout << "YES";
            return;
        }
    }
    cout << "NO";
}

bool multi = true;

signed main() {
    int t = 1;
    if (multi) cin >> t;
    for (int i = 1; i <= t; ++i) {
        solve(i);
        cout << "\n";
    }
    return 0;
}
2115
A
Gellyfish and Flaming Peony
Gellyfish hates math problems, but she has to finish her math homework: Gellyfish is given an array of $n$ positive integers $a_1, a_2, \ldots, a_n$. She needs to do the following two-step operation until all elements of $a$ are equal: - Select two indexes $i$, $j$ satisfying $1 \leq i, j \leq n$ and $i \neq j$. - Replace $a_i$ with $\gcd(a_i, a_j)$. Now, Gellyfish asks you for the minimum number of operations to achieve her goal. It can be proven that Gellyfish can always achieve her goal.
Try to think about why Gellyfish can always achieve her goal, and what all the elements will ultimately become. Once you've figured out Hint 1, try using dynamic programming. Let $g = \gcd(a_1, a_2, \dots, a_n)$. It can be shown that eventually all elements become $g$. Suppose all numbers eventually equal $x$. Then $x \mid a_i$ for all $i$, because after $a_i := \gcd(a_i, a_j)$ the new value of $a_i$ is a divisor of the original; it further follows that $x \mid g$. Further analysis shows that $x$ cannot be less than $g$ no matter what operations are performed, so $x = g$. Next, suppose that after some operations there exists an $a_k$ equal to $g$. Then for each element $a_i$ not equal to $g$, we simply choose $j = k$ and set $a_i := \gcd(a_i, a_k)$, after which all elements become $g$. If $g$ initially appears in $a$, the problem is simple: we just count the elements of $a$ that are not equal to $g$. Otherwise, we need to make some element equal to $g$ in the minimum number of operations. This is achieved with dynamic programming, using $f_x$ for the minimum number of operations needed to make some element equal to $x$. The transition is simple: enumerate $x$ from largest to smallest, and for every $i$ use $f_x + 1$ to update $f_{\gcd(x, a_i)}$. But computing $\gcd(x, y)$ takes $O(\log x)$, so a direct transition takes $O(n \max(a) \log \max(a))$ time. We therefore preprocess: let $h_{x, y} = \gcd(x, y)$; obviously $h_{x, y} = h_{y, x \bmod y}$, so $h$ can be precomputed before all test cases in $O(\max(a)^2)$ time. Time complexity: $O(n \max(a))$ per test case and $O(\max(a)^2)$ for preprocessing. Memory complexity: $O(n + \max(a))$ per test case and $O(\max(a)^2)$ for preprocessing.
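The counting logic can be sketched end to end in Python (function name is mine; `math.gcd` stands in for the precomputed table $h$, so this sketch has the slower per-transition cost):

```python
from math import gcd

def min_ops(a):
    # Minimum operations a_i := gcd(a_i, a_j) to make all elements equal.
    g = 0
    for v in a:
        g = gcd(g, v)
    a = [v // g for v in a]  # normalize so the target value becomes 1
    m = max(a)
    INF = float("inf")
    f = [INF] * (m + 1)      # f[x]: min ops to make some element equal to x
    for v in a:
        f[v] = 0
    for x in range(m, 0, -1):
        if f[x] == INF:
            continue
        for v in a:
            d = gcd(x, v)
            f[d] = min(f[d], f[x] + 1)
    # f[1] ops create one element equal to g; every other element != g
    # then needs exactly one more operation.
    return max(f[1] - 1, 0) + sum(1 for v in a if v > 1)
```

For `[6, 10, 15]` the gcd $1$ is absent, two operations build it (e.g. $\gcd(6,10)=2$, then $\gcd(2,15)=1$), and two more finish the other elements, giving $4$.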
Try to solve $n, a_i \leq 2 \times 10^5$ with only one test case.
[ "constructive algorithms", "dp", "math", "number theory" ]
1,500
#include<bits/stdc++.h>
using namespace std;

const int N = 5000 + 5;

inline void checkmax(int &x, int y) { if (y > x) x = y; }
inline void checkmin(int &x, int y) { if (y < x) x = y; }

int n = 0, m = 0, k = 0, a[N] = {}, f[N] = {};
int g[N][N] = {}, ans = 0;

inline void solve() {
    scanf("%d", &n);
    m = k = 0;
    for (int i = 1; i <= n; i++) {
        scanf("%d", &a[i]);
        k = g[k][a[i]];
    }
    memset(f, 0x3f, sizeof(f));
    for (int i = 1; i <= n; i++) a[i] /= k, checkmax(m, a[i]), f[a[i]] = 0;
    for (int x = m; x >= 1; x--)
        for (int i = 1; i <= n; i++) {
            int y = a[i];
            checkmin(f[g[x][y]], f[x] + 1);
        }
    ans = max(f[1] - 1, 0);
    for (int i = 1; i <= n; i++)
        if (a[i] > 1) ans++;
    printf("%d\n", ans);
}

int T = 0;

int main() {
    for (int x = 0; x < N; x++) g[x][0] = g[0][x] = g[x][x] = x;
    for (int x = 1; x < N; x++)
        for (int y = 1; y < x; y++) g[x][y] = g[y][x] = g[y][x % y];
    scanf("%d", &T);
    while (T--) solve();
    return 0;
}
2115
B
Gellyfish and Camellia Japonica
Gellyfish has an array of $n$ integers $c_1, c_2, \ldots, c_n$. In the beginning, $c = [a_1, a_2, \ldots, a_n]$. Gellyfish will make $q$ modifications to $c$. For $i = 1,2,\ldots,q$, Gellyfish is given three integers $x_i$, $y_i$, and $z_i$ between $1$ and $n$. Then Gellyfish will set $c_{z_i} := \min(c_{x_i}, c_{y_i})$. After the $q$ modifications, $c = [b_1, b_2, \ldots, b_n]$. Now Flower knows the value of $b$ and the value of the integers $x_i$, $y_i$, and $z_i$ for all $1 \leq i \leq q$, but she doesn't know the value of $a$. Flower wants to find any possible value of the array $a$ or report that no such $a$ exists. If there are multiple possible values of the array $a$, you may output any of them.
Try working backwards from the final sequence to the initial one. If you're confused about Hint 1, it's probably because the result of each step isn't unique. Think carefully about whether you can just take the "tightest" result possible. Let's look at the problem another way: if we only require that the final value of $c_i$ is greater than or equal to $b_i$ for all $i$, what are the constraints on the $a_i$? It can be proven that the constraints take the form of a sequence $l$: it is sufficient that $a_i \geq l_i$ for all $i$. To see this, we work backward from the last operation and observe the restrictions on $c$ at each step. After the last operation we need $c_i \geq b_i$, so initially $l = b$. Consider an operation that replaces $c_z$ with $\min(c_x, c_y)$. Given the post-operation restriction $l$, can we recover the pre-operation restriction $l'$? It is not difficult to see that $l'_i = l_i$ for $i \notin \{x, y, z\}$. Also $l'_z = 0$, because the pre-operation value of $c_z$ is overwritten and therefore unrestricted. Finally, $l'_x = \max(l_x, l_z)$ and $l'_y = \max(l_y, l_z)$: since the new $c_z$ equals the original $\min(c_x, c_y)$, the original $c_x$ satisfies $c_x \geq \min(c_x, c_y) = c_z \geq l_z$, and the case of $y$ is symmetric. We have thus shown that, for the finally obtained $l$, the condition $a_i \geq l_i$ for all $i$ is sufficient to end with $c_i \geq b_i$ for all $i$. And since ultimately we need $c_i = b_i$, the condition $a_i \geq l_i$ for all $i$ is also necessary. In fact, we can just take $a = l$, perform all the operations sequentially, and check whether we end up with $c = b$. This is because decreasing $a$ can only decrease the final $c$, while the guarantee from $l$ keeps every $c_i \geq b_i$; since we are effectively trying to minimize all of $c$, minimizing all of $a$ is clearly an optimal decision. Time complexity: $O(n + q)$ per test case.
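The backward pass and the forward verification fit in a few lines of Python (function name is mine; indices follow the statement's 1-based convention):

```python
def recover_a(n, b, ops):
    # b: final array (length n); ops: list of (x, y, z), 1-indexed.
    # Returns a candidate initial array a, or None if none exists.
    l = [0] + b[:]                 # l[i]: lower bound needed on c_i; 1-indexed
    for x, y, z in reversed(ops):
        v = l[z]
        l[z] = 0                   # the old c_z is overwritten: unrestricted
        l[x] = max(l[x], v)        # c_x >= min(c_x, c_y) = new c_z >= v
        l[y] = max(l[y], v)
    a = l[1:]
    # replay forward with the tightest candidate a = l and verify c == b
    c = l[:]
    for x, y, z in ops:
        c[z] = min(c[x], c[y])
    return a if c[1:] == b else None
```

For example, with $b = [1, 2, 1]$ and the single operation $(1, 2, 3)$, the backward pass yields $a = [1, 2, 0]$, and replaying confirms it.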
Memory complexity: $O(n + q)$ per test case.
[ "brute force", "constructive algorithms", "dfs and similar", "dp", "graphs", "greedy", "trees" ]
2,100
#include<bits/stdc++.h>
using namespace std;

const int N = 3e5 + 5;

int n = 0, q = 0, a[N] = {}, b[N] = {}, c[N] = {};
int x[N] = {}, y[N] = {}, z[N] = {};

inline void init() {
    for (int i = 1; i <= n; i++) a[i] = b[i] = c[i] = 0;
    for (int i = 1; i <= q; i++) x[i] = y[i] = z[i] = 0;
    n = q = 0;
}

inline void solve() {
    cin >> n >> q;
    for (int i = 1; i <= n; i++) {
        cin >> b[i];
        c[i] = b[i];
    }
    for (int i = 1; i <= q; i++) cin >> x[i] >> y[i] >> z[i];
    for (int i = q; i >= 1; i--) {
        int v = c[z[i]];
        c[z[i]] = 0;
        c[x[i]] = max(c[x[i]], v), c[y[i]] = max(c[y[i]], v);
    }
    for (int i = 1; i <= n; i++) a[i] = c[i];
    for (int i = 1; i <= q; i++) c[z[i]] = min(c[x[i]], c[y[i]]);
    for (int i = 1; i <= n; i++)
        if (b[i] != c[i]) {
            cout << "-1\n";
            return;
        }
    for (int i = 1; i <= n; i++) cout << a[i] << ' ';
    cout << '\n';
}

int T = 0;

int main() {
    ios :: sync_with_stdio(0);
    cin.tie(0), cout.tie(0);
    cin >> T;
    for (int i = 0; i < T; i++) init(), solve();
    return 0;
}
2115
C
Gellyfish and Eternal Violet
There are $n$ monsters, numbered from $1$ to $n$, in front of Gellyfish. The HP of the $i$-th monster is $h_i$. Gellyfish doesn't want to kill them, but she wants to keep these monsters from being a threat to her. So she wants to reduce the HP of all the monsters to exactly $1$. Now, Gellyfish, with The Sword Sharpened with Tears, is going to attack the monsters for $m$ rounds. For each round: - The Sword Sharpened with Tears shines with a probability of $p$. - Gellyfish can choose whether to attack: - If Gellyfish doesn't attack, nothing happens. - If Gellyfish chooses to attack and The Sword Sharpened with Tears shines, the HP of all the monsters will be reduced by $1$. - If Gellyfish chooses to attack and The Sword Sharpened with Tears does not shine, Gellyfish can choose one of the monsters and reduce its HP by $1$. Please note that before Gellyfish decides whether or not to attack, she will know whether the sword shines or not. Also, when the sword shines, Gellyfish can only make attacks on all the monsters and cannot make an attack on only one monster. Now, Gellyfish wants to know what the probability is that she will reach her goal if she makes choices optimally during the battle.
Try to find an $O(nmh^2)$ solution using dynamic programming. Re-examining Gellyfish's strategy, there are definitely situations in which she chooses to attack. Can we divide the $m$ rounds into two phases by some property? If the lowest HP among the current monsters is $l$, Gellyfish can make at most $l - 1$ area attacks (attacks made while the sword shines). So each monster with HP $h_i$ must receive at least $h_i - l$ single-target attacks. Furthermore, we only care about $l$ and $\sum\limits_{i=1}^n (h_i - l)$. Consider directly using $f_{i, l, x}$ for the probability that Gellyfish reaches her goal when there are still $i$ rounds to go, the lowest HP of the monsters is $l$, and $x = \sum\limits_{i=1}^n (h_i - l)$. This solves the problem in $O(nmh^2)$, but that's not enough. Consider the initial HP of all monsters, and let $l = \min\limits_{i=1}^n h_i$, $s = \sum\limits_{i=1}^n (h_i - l)$. For the first $s$ times the sword doesn't shine, Gellyfish will obviously attack. So let $g_{i, j}$ be the probability that the sword did not shine exactly $j$ times in the first $i$ rounds, with the $i$-th round being one in which it did not shine. We then enumerate the round of the $s$-th time the sword didn't shine, dividing the problem into two parts: the first half is handled with $g$, the second half with $f$, and merging the two parts is not difficult. When we use $f$ for the second part, note that at its start all monsters have the same HP, and directing each single-target attack at a monster with the highest HP never makes the answer worse. So $h_i - l \leq 1$ holds at all times, and the range of $x$ we actually need is compressed to $[0, n)$. Time complexity: $O(nmh)$ per test case. Memory complexity: $O(nmh)$ per test case.
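The table used for the first phase is, in isolation, just the distribution of the number of non-shining rounds; a minimal sketch of that counting DP alone (simplified: no cap at $s$, which the full solution applies):

```python
def nonshine_distribution(k, p):
    # f[i][x]: probability that exactly x of the first i rounds do not
    # shine, where each round shines independently with probability p.
    f = [[0.0] * (k + 1) for _ in range(k + 1)]
    f[0][0] = 1.0
    for i in range(k):
        for x in range(i + 1):
            f[i + 1][x] += f[i][x] * p            # round shines
            f[i + 1][x + 1] += f[i][x] * (1 - p)  # round does not shine
    return f

f = nonshine_distribution(2, 0.5)
```

With $p = 0.5$ and $k = 2$ this reproduces the binomial weights $\frac14, \frac12, \frac14$.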
[ "combinatorics", "dp", "greedy", "math", "probabilities" ]
2,700
#include<bits/stdc++.h>
using namespace std;

const int N = 22, K = 4000 + 5, M = 400 + 5, Inf = 0x3f3f3f3f;

inline void checkmin(double &x, double y) { if (y < x) x = y; }

int n = 0, m = 0, s = 0, k = 0, p0 = 0, h[N] = {};
double p = 0, f[K][K] = {}, g[K][N][M] = {}, ans = 0;

inline void init() {
    for (int i = 0; i <= k; i++) {
        for (int c = 0; c < n; c++)
            for (int x = 0; x <= m; x++) g[i][c][x] = 0;
        for (int x = 0; x <= s; x++) f[i][x] = 0;
    }
    m = Inf, s = 0, ans = 0;
}

inline void solve() {
    scanf("%d %d %d", &n, &k, &p0);
    p = 1.0 * p0 / 100;
    for (int i = 1; i <= n; i++) {
        scanf("%d", &h[i]);
        h[i]--;
        m = min(m, h[i]);
    }
    for (int i = 1; i <= n; i++) s += h[i] - m;
    if (s > k) {
        printf("0.000000\n");
        return;
    }
    g[0][0][0] = 1;
    for (int i = 1; i <= k; i++) {
        g[i][0][0] = 1;
        for (int x = 1; x <= m; x++)
            g[i][0][x] = g[i - 1][0][x - 1] * p + max(g[i - 1][0][x], g[i - 1][n - 1][x - 1]) * (1 - p);
        for (int c = 1; c < n; c++) {
            g[i][c][0] = g[i - 1][c][0] * p + g[i - 1][c - 1][0] * (1 - p);
            for (int x = 1; x <= m; x++)
                g[i][c][x] = g[i - 1][c][x - 1] * p + g[i - 1][c - 1][x] * (1 - p);
        }
    }
    f[0][0] = 1;
    for (int i = 0; i < k; i++)
        for (int x = 0; x < s; x++) {
            f[i + 1][x] += f[i][x] * p;
            f[i + 1][x + 1] += f[i][x] * (1 - p);
        }
    for (int i = s; i <= k; i++) {
        double r = 0;
        for (int x = 0; x <= min(i - s, m); x++) r = max(r, g[k - i][0][m - x]);
        ans += r * f[i][s];
    }
    printf("%.6lf\n", ans);
}

int T = 0;

int main() {
    scanf("%d", &T);
    while (T--) init(), solve();
    return 0;
}
2115
D
Gellyfish and Forget-Me-Not
Gellyfish and Flower are playing a game. The game consists of two arrays of $n$ integers $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$, along with a binary string $c_1c_2\ldots c_n$ of length $n$. There is also an integer $x$ which is initialized to $0$. The game consists of $n$ rounds. For $i = 1,2,\ldots,n$, the round proceeds as follows: - If $c_i = \mathtt{0}$, Gellyfish will be the active player. Otherwise, if $c_i = \mathtt{1}$, Flower will be the active player. - The active player will perform \textbf{exactly one} of the following operations: - Set $x:=x \oplus a_i$. - Set $x:=x \oplus b_i$. Here, $\oplus$ denotes the bitwise XOR operation. Gellyfish wants to minimize the final value of $ x $ after $ n $ rounds, while Flower wants to maximize it. Find the final value of $ x $ after all $ n $ rounds if both players play optimally.
Consider the case where $c$ consists only of $0$s: the problem then turns into another classic problem, so you need at least to recognize what that is. The "linear basis" is the answer to Hint 1. In what follows, interpret all addition operations as XOR. We can assume that the initial value of $x$ is $\bigoplus_i a_i$: in each round, the active player either adds $d_i = a_i + b_i$ to $x$, or does nothing (we write $d_i$ to avoid clashing with the string $c$). Each suffix of the sequence can be seen as a subproblem, so we prove inductively that the answer, as a function $f(x)$ of the initial value of $x$, is an affine transformation, i.e., $f(x) = Ax + b$. When $n = 0$, this is trivial: $f(x) = x$. When $n > 0$, consider the first round and let $f(x)$ denote the answer function from the second round onward. The two possible outcomes are: not choosing $d_i$, with result $f(x)$; or choosing $d_i$, with result $f(x + d_i) = f(x) + f(d_i) + b$. Although we don't know the exact value of $x$, $f(d_i) + b$ is a constant. Suppose the highest set bit in the binary representation of $f(d_i) + b$ is at position $k$. Then the decision of whether to apply this operation depends only on: whether the active player wants to maximize or minimize the final answer, and whether the $k$-th bit of $f(x)$ is $0$ or $1$. It is easy to observe that the new function $g(x)$ (the answer before this decision) remains an affine transformation, satisfying $g(x) = g(x + d_i)$. Based on the above process, we can represent the answer function $f(x)$ using an orthogonal basis. Each element in the orthogonal basis represents a vector in the null space of $A$. Additionally, we keep a tag for each vector, indicating whether it is used to increase or decrease the value of $x$. Processing the rounds from last to first, we try to insert each $d_i$ into the linear basis and associate it with a tag depending on the player active in round $i$. Time complexity: $O(n \log \max(a, b, x))$ per test case. Memory complexity: $O(n + \log \max(a, b, x))$ per test case.
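The core primitive here is a reduced ("orthogonal") XOR linear basis; a minimal Python sketch of insertion and greedy minimization, mirroring the structure of the reference code (tag bookkeeping omitted; function names are mine):

```python
L = 60  # bit width, as in the reference solution

def insert(basis, v):
    # Insert v into a reduced XOR basis: basis[k] is 0 or has highest set
    # bit k, and no other basis row contains bit k. Returns the pivot
    # position, or -1 if v is linearly dependent on the basis.
    for k in range(L - 1, -1, -1):
        if not (v >> k) & 1:
            continue
        if basis[k]:
            v ^= basis[k]
        else:
            for j in range(k - 1, -1, -1):   # clear lower pivot bits of v
                if (v >> j) & 1 and basis[j]:
                    v ^= basis[j]
            basis[k] = v
            for j in range(k + 1, L):        # clear bit k in higher rows
                if (basis[j] >> k) & 1:
                    basis[j] ^= basis[k]
            return k
    return -1

def minimize(basis, x):
    # Greedily minimize x over the coset x XOR span(basis).
    for k in range(L - 1, -1, -1):
        if (x >> k) & 1 and basis[k]:
            x ^= basis[k]
    return x

basis = [0] * L
insert(basis, 3)                # pivot at bit 1
insert(basis, 5)                # pivot at bit 2
assert insert(basis, 6) == -1   # 6 = 3 ^ 5 is dependent
```

With this basis, `minimize(basis, 7)` returns $1$, the minimum over $\{7, 7 \oplus 3, 7 \oplus 5, 7 \oplus 6\}$.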
[ "bitmasks", "dp", "games", "greedy", "math" ]
2,900
#include <bits/stdc++.h>
using i64 = long long;

constexpr int L = 60;

int main() {
    std::ios::sync_with_stdio(false), std::cin.tie(0);
    int T;
    for (std::cin >> T; T; T--) {
        int n;
        std::cin >> n;
        std::vector<i64> a(n), b(n);
        std::string str;
        i64 all = 0;
        for (auto &x : a) std::cin >> x, all ^= x;
        for (int i = 0; i < n; i++) {
            std::cin >> b[i];
            b[i] ^= a[i];
        }
        std::cin >> str;
        std::vector<i64> bas(L);
        std::vector<int> bel(L, -1);
        i64 ans = 0;
        for (int i = n - 1; i >= 0; i--) {
            i64 x = b[i], col = str[i] - '0';
            for (int i = L - 1; i >= 0; i--)
                if (x >> i & 1) {
                    if (bas[i]) {
                        x ^= bas[i];
                    } else {
                        for (int j = i - 1; j >= 0; j--)
                            if (x >> j & 1) x ^= bas[j];
                        bas[i] = x;
                        for (int j = L - 1; j > i; j--)
                            if (bas[j] >> i & 1) bas[j] ^= bas[i];
                        bel[i] = col;
                        break;
                    }
                }
        }
        for (int i = L - 1; i >= 0; i--)
            if (all >> i & 1) all ^= bas[i];
        for (int i = 0; i < L; i++)
            if (bel[i] == 1) ans ^= bas[i];
        std::cout << (all ^ ans) << '\n';
    }
    return 0;
}
2115
E
Gellyfish and Mayflower
\begin{quote} Mayflower by Plum \end{quote} May, Gellyfish's friend, loves playing a game called "Inscryption" which is played on a directed acyclic graph with $n$ vertices and $m$ edges. All edges $ a \rightarrow b$ satisfy $a<b$. You start in vertex $1$ with some coins. You need to move from vertex $1$ to the vertex where the boss is located along the directed edges, and then fight with the final boss. Each of the $n$ vertices of the graph contains a Trader who will sell you a card with power $w_i$ for $c_i$ coins. You can buy as many cards as you want from each Trader. However, you can only trade with the trader on the $i$-th vertex if you are currently on the $i$-th vertex. In order to defeat the boss, you want the sum of the power of your cards to be as large as possible. You will have to answer the following $q$ queries: - Given integers $p$ and $r$. If the final boss is located at vertex $p$, and you have $r$ coins in the beginning, what is the maximum sum of the power of your cards when you fight the final boss? Note that you are allowed to trade cards on vertex $p$.
There is an easy way to solve the problem in $O(m \max(r) + q)$ time complexity. Thus cases with small $r$ are easy, but what about cases with large $r$? There is a classic but mistaken greedy where we only take the item with the largest $\frac{w}{c}$. This is obviously wrong in general, but Hint 1 lets us rule out the case where $r$ is small; is there an efficient algorithm that can fix this greedy for larger $r$? Let $s$ be any path from vertex $1$ to vertex $p$, passing through the vertices $s_1, s_2, \dots, s_k$ in order. Let $z$ be the vertex in $s$ with the largest $\frac{w}{c}$, and let $C = c_z$, $W = w_z$. Call the cards you do not get from vertex $z$ special cards. Lemma 1. There exists an optimal solution in which the number of special cards does not exceed $C$. Proof. Let $c'_1, c'_2, \dots, c'_{k'}$ be the costs of the special cards, and let $p'_i = \sum_{j=1}^i c'_j$. If there exist $0 \leq l < r \leq k'$ with $p'_l \equiv p'_r \pmod C$, then $\sum_{i=l+1}^{r} c'_i \equiv 0 \pmod C$, so we can replace these cards with $\frac{\sum_{i=l+1}^{r} c'_i}{C}$ cards from vertex $z$, and the answer won't get worse. Since there are only $C$ residues modulo $C$, there are no more than $C$ special cards. Hence the total cost of the special cards won't exceed $\max(c)^2$. Now we can use dynamic programming to solve the problem: $dp(u, v, x, 0/1)$ is the maximum sum of the power of the cards when we are currently at vertex $u$, the vertex on the path with the largest $\frac{w}{c}$ is $v$, the total cost of the cards is $x$, and we have or have not yet reached vertex $v$. Since the remainder of the budget will be filled with cards from $v$, we just need the values $dp(u, v, x)$ with $1 \leq x \leq \max(c)^2$. But unfortunately, the time complexity of this algorithm is $O(mn\max(c)^2)$. It's not fast enough.
At this point you'll find that solving the problem directly becomes incredibly tricky, so we'll try to split it into two subproblems. First consider the following question: is there a better solution when $r$ is sufficiently large? We broaden the problem by allowing a negative number of cards to be bought from the vertex with the largest $\frac{w}{c}$. Doing so can only make the answer larger, but when $r$ is large enough the answer does not change: by Lemma 1, if the total cost of the special cards exceeds $\max(c)^2$, there is a solution that is at least as good. Thus when $r > \max(c)^2$, the relaxed problem gives the answer. We can solve the relaxation with another dynamic programming: $g(u, v, x, 0/1)$ has the same meaning as before (current vertex $u$, best-ratio vertex $v$, total cost $x$, whether $v$ has been reached), except that whenever $x$ is equal to or greater than $c_v$, we remove several cards from vertex $v$ to bring it back to $0 \leq x < c_v$. The time complexity becomes $O(mn \max(c))$, which is fast enough, and for each query we enumerate $v$ in $O(n)$ time. As for $r \leq \max(c)^2$, it's an easy problem: $f(u, x)$ is the maximum sum of the power of the cards when we are currently at vertex $u$ and their total cost is $x$. The time complexity is $O(m \max(c)^2)$, and each query is answered directly from $f$ in $O(1)$ time. Overall, we have solved the problem. Time complexity: $O(mn \max(c) + m \max(c)^2 + qn)$. Memory complexity: $O(n^2 \max(c) + n \max(c)^2)$.
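The easy small-$r$ part, $f(u, x)$, is an unbounded knapsack pushed along the DAG; a simplified standalone sketch (function name is mine; `cost`/`power` are 1-indexed with a dummy entry at index 0, and every edge $(a, b)$ satisfies $a < b$ as in the statement):

```python
def dag_knapsack(n, edges, cost, power, budget):
    # f[u][x]: max total card power reachable at vertex u having spent
    # at most x coins (f[u][x] is monotone in x because f[1] starts flat).
    NEG = float("-inf")
    f = [[NEG] * (budget + 1) for _ in range(n + 1)]
    f[1] = [0] * (budget + 1)      # start at vertex 1 with nothing bought
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
    for u in range(1, n + 1):      # topological order, since a < b on edges
        # unbounded knapsack: buy any number of cards at vertex u
        for x in range(budget - cost[u] + 1):
            if f[u][x] != NEG:
                f[u][x + cost[u]] = max(f[u][x + cost[u]], f[u][x] + power[u])
        for v in adj[u]:           # move along an edge, keeping purchases
            for x in range(budget + 1):
                f[v][x] = max(f[v][x], f[u][x])
    return f
```

For example, with cards $(c, w) = (2, 3)$ at vertex 1 and $(3, 5)$ at vertex 2 and an edge $1 \to 2$, a budget of 5 at vertex 2 buys one card of each for power 8.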
[ "dp", "graphs" ]
3,500
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll N = 200 + 5, Inf = 0xcfcfcfcfcfcfcfcf;

inline ll sqr(ll x) { return x * x; }
inline ll gcd(ll x, ll y) { if (y) return gcd(y, x % y); else return x; }
inline void checkmax(ll &x, ll y) { if (y > x) x = y; }

ll n = 0, m = 0, magic = 0, w[N] = {}, c[N] = {};
ll f[N][N * N] = {}, g[N][N][N][2] = {};
vector<vector<ll> > G(N);

inline void main_min() {
    memset(f, 0xcf, sizeof(f));
    for (ll x = 0; x <= magic; x++) f[1][x] = 0;
    for (ll u = 1; u <= n; u++) {
        for (ll x = 0; x + c[u] <= magic; x++) checkmax(f[u][x + c[u]], f[u][x] + w[u]);
        for (ll v : G[u])
            for (ll x = 0; x <= magic; x++) checkmax(f[v][x], f[u][x]);
    }
}

inline void solve_max(ll i) {
    ll a = c[i], b = w[i];
    for (ll x = 0; x < a; x++) g[i][1][x][0] = 0;
    for (ll u = 1; u <= n; u++) {
        if (u == i)
            for (ll x = 0; x < a; x++) checkmax(g[i][u][x][1], g[i][u][x][0]);
        else if (w[u] * a > c[u] * b)
            memset(g[i][u], 0xcf, sizeof(g[i][u]));
        for (ll s = 0, k = gcd(c[u], a); s < k; s++)
            for (ll x = s, t = 0; t < 2 * (a / k); x = (x + c[u]) % a, t++) {
                checkmax(g[i][u][(x + c[u]) % a][0], g[i][u][x][0] + w[u] - ((x + c[u]) / a) * b);
                checkmax(g[i][u][(x + c[u]) % a][1], g[i][u][x][1] + w[u] - ((x + c[u]) / a) * b);
            }
        for (ll v : G[u])
            for (ll x = 0; x < a; x++) {
                checkmax(g[i][v][x][0], g[i][u][x][0]);
                checkmax(g[i][v][x][1], g[i][u][x][1]);
            }
    }
}

inline void main_max() {
    memset(g, 0xcf, sizeof(g));
    for (ll i = 1; i <= n; i++) solve_max(i);
}

int main() {
    scanf("%lld %lld", &n, &m);
    for (ll i = 1; i <= n; i++) {
        scanf("%lld %lld", &c[i], &w[i]);
        magic = max(magic, sqr(c[i]));
    }
    for (ll i = 1, u = 0, v = 0; i <= m; i++) {
        scanf("%lld %lld", &u, &v);
        G[u].push_back(v);
    }
    main_min(), main_max();
    ll q = 0, p = 0, r = 0;
    scanf("%lld", &q);
    while (q--) {
        scanf("%lld %lld", &p, &r);
        if (r <= magic) printf("%lld\n", f[p][r]);
        else {
            ll ans = Inf;
            for (ll i = 1; i <= n; i++) {
                ll a = c[i], b = w[i];
                checkmax(ans, g[i][p][r % a][1] + (r / a) * b);
            }
            printf("%lld\n", ans);
        }
    }
    return 0;
}
2115
F1
Gellyfish and Lycoris Radiata (Easy Version)
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, the time limit and the constraints on $n$ and $q$ are lower. You can hack only if you solved all versions of this problem.} Gellyfish has an array consisting of $n$ sets. Initially, all the sets are empty. Now Gellyfish will do $q$ operations. Each operation contains one modification operation and one query operation, for the $i$-th ($1 \leq i \leq q$) operation: First, there will be a modification operation, which is one of the following: - \textbf{Insert} operation: You are given an integer $r$. For the $1$-st to $r$-th sets, insert element $i$. Note that the element inserted here is $i$, the index of the operation, not the index of the set. - \textbf{Reverse} operation: You are given an integer $r$. Reverse the $1$-st to $r$-th sets. - \textbf{Delete} operation: You are given an integer $x$. Delete element $x$ from all sets that contain $x$. Followed by a query operation: - \textbf{Query} operation: You are given an integer $p$. Output the smallest element in the $p$-th set (If the $p$-th set is empty, the answer is considered to be $0$). Now, Flower needs to provide the answer for each query operation. Please help her! \textbf{Additional constraint on the problem}: Gellyfish will only give the next operation after Flower has answered the previous query operation. That is, you need to solve this problem \textbf{online}. Please refer to the input format for more details.
We apply block decomposition to the operations, dividing every $B$ operations into a single round. Within each round, the sequence is partitioned into $O(B)$ segments. For each segment, we maintain a queue that records all elements added to that segment during the round. Type 1 and 2 operations can be handled directly by pushing into the appropriate segment's queue or toggling a reversal flag. Type 3 (deletion) is handled by marking the element $x$ as deleted without immediately removing it from queues. At the end of each round, we rebuild the sequence. Specifically, for each position in the sequence, we record which segment it belongs to, and treat the position as containing all elements currently in that segment's queue. For queries, we process the contribution from each round one by one. In each round: Identify the segment that contains position $p$. Iterate through the queue of that segment. For each element, if it has been marked as deleted, remove it from the front of the queue and continue; otherwise, consider it as present. Since each element is inserted and deleted at most once per segment, and each segment has $O(1)$ amortized processing per round, the total cost per query remains efficient. We choose $B = \sqrt{q}$, resulting in $O(\sqrt{q})$ rounds in total. The total time and space complexity is: $O((n + q)\sqrt{q})$
[ "data structures" ]
3,500
#pragma GCC optimize(2) #pragma GCC optimize("Ofast") #pragma GCC optimize("inline","fast-math","unroll-loops","no-stack-protector") #pragma GCC diagnostic error "-fwhole-program" #pragma GCC diagnostic error "-fcse-skip-blocks" #pragma GCC diagnostic error "-funsafe-loop-optimizations" // MagicDark #include <bits/stdc++.h> #define debug cerr << "\33[32m[" << __LINE__ << "]\33[m " #define SZ(x) ((int) x.size() - 1) #define all(x) x.begin(), x.end() #define ms(x, y) memset(x, y, sizeof x) #define F(i, x, y) for (int i = (x); i <= (y); i++) #define DF(i, x, y) for (int i = (x); i >= (y); i--) using namespace std; typedef long long ll; typedef unsigned long long ull; template <typename T> T& chkmax(T& x, T y) {return x = max(x, y);} template <typename T> T& chkmin(T& x, T y) {return x = min(x, y);} // template <typename T> T& read(T &x) { // x = 0; int f = 1; char c = getchar(); // for (; !isdigit(c); c = getchar()) if (c == '-') f = -f; // for (; isdigit(c); c = getchar()) x = (x << 1) + (x << 3) + (c ^ 48); // return x *= f; // } // bool be; const int N = 1e5 + 1010, B = 500, B1 = N / B + 5, B2 = B + 5; int n, q, ans, p[N], wp[N], tot, t[N], tl[N], tr[N]; bool rev[N]; bool ed[N]; // bool vv[N]; // struct Q1 { // int tl = 1, tr; // int a[B2]; // bool chk() { // return tl <= tr; // } // int front() { // return a[tl]; // } // void pop() { // tl++; // } // void push(int x) { // a[++tr] = x; // } // } tq[N]; // struct Q2 { // int tl = 1, tr; // int a[B1]; // bool chk() { // return tl <= tr; // } // int front() { // return a[tl]; // } // void pop() { // tl++; // } // void push(int x) { // a[++tr] = x; // } // } vq[N]; queue <int> tq[N], vq[N]; vector <int> cur; int qq(int x) { while (tq[x].size()) { if (!ed[tq[x].front()]) return tq[x].front(); tq[x].pop(); } return 0; } int query(int x) { int s = 0; for (int i: cur) { s += tr[i] - tl[i] + 1; if (s >= x) { int g = s - x + 1; int y; if (rev[i]) { y = p[tl[i] + g - 1]; } else { y = p[tr[i] - g + 1]; } while (vq[y].size()) { 
int tmp = qq(vq[y].front()); if (tmp) return tmp; vq[y].pop(); } // int tmp = qq(i); // if (~tmp) return tmp; return qq(i); } } assert(false); // return -1; } // bool ee; // int cnt = 0; signed main() { ios::sync_with_stdio(0); // don't use puts cin.tie(0), cout.tie(0); // debug << abs(&ee - &be) / 1024 / 1024 << endl; cin >> n >> q; F(i, 1, n) p[i] = i; cur.push_back(++tot); tl[1] = 1, tr[1] = n; F(i, 1, q) { int f, x, y; cin >> f >> x >> y; if (f == 3) { x = (x + ans - 1) % q + 1; } else { x = (x + ans - 1) % n + 1; } y = (y + ans - 1) % n + 1; // auto split = [&] (int x) { // if (x > n || vv[x]) return; // // vv[x] = true; // for (auto []) // }; if (f == 1) { int s = 0; for (int j: cur) { int w = tr[j] - tl[j] + 1; s += w; if (s >= x) { if (s > x) { tot++; tq[tot] = tq[j]; int g = s - x; if (rev[tot] = rev[j]) { tl[tot] = tl[j]; tr[tot] = (tl[j] += g) - 1; } else { tr[tot] = tr[j]; tl[tot] = (tr[j] -= g) + 1; } cur.insert(next(find(all(cur), j)), tot); } tq[j].push(i); break; } tq[j].push(i); } } if (f == 2) { int s = 0, sz = 0; for (int j: cur) { sz++; int w = tr[j] - tl[j] + 1; s += w; if (s >= x) { if (s > x) { tot++; tq[tot] = tq[j]; int g = s - x; if (rev[tot] = rev[j]) { tl[tot] = tl[j]; tr[tot] = (tl[j] += g) - 1; } else { tr[tot] = tr[j]; tl[tot] = (tr[j] -= g) + 1; } cur.insert(next(find(all(cur), j)), tot); } reverse(cur.begin(), cur.begin() + sz); F(j, 0, sz - 1) rev[cur[j]] ^= true; break; } } } if (f == 3) { if (x < i) ed[x] = true; } cout << (ans = query(y)) << '\n'; if (cur.size() >= B) { int cnt = 0; for (int j: cur) { if (rev[j]) { DF(k, tr[j], tl[j]) { vq[wp[++cnt] = p[k]].push(j); } } else { F(k, tl[j], tr[j]) { vq[wp[++cnt] = p[k]].push(j); } } } F(j, 1, n) { p[j] = wp[j]; } cur.clear(); cur.push_back(++tot); tl[tot] = 1, tr[tot] = n; } // for (int j: cur) { // debug << tl[j] << " " << tr[j] << " " << rev[j] << endl; // } } return 0; }
2115
F2
Gellyfish and Lycoris Radiata (Hard Version)
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, the time limit and the constraints on $n$ and $q$ are higher. You can hack only if you solved all versions of this problem.} Gellyfish has an array consisting of $n$ sets. Initially, all the sets are empty. Now Gellyfish will do $q$ operations. Each operation contains one modification operation and one query operation, for the $i$-th ($1 \leq i \leq q$) operation: First, there will be a modification operation, which is one of the following: - \textbf{Insert} operation: You are given an integer $r$. For the $1$-st to $r$-th sets, insert element $i$. Note that the element inserted here is $i$, the index of the operation, not the index of the set. - \textbf{Reverse} operation: You are given an integer $r$. Reverse the $1$-st to $r$-th sets. - \textbf{Delete} operation: You are given an integer $x$. Delete element $x$ from all sets that contain $x$. Followed by a query operation: - \textbf{Query} operation: You are given an integer $p$. Output the smallest element in the $p$-th set (If the $p$-th set is empty, the answer is considered to be $0$). Now, Flower needs to provide the answer for each query operation. Please help her! \textbf{Additional constraint on the problem}: Gellyfish will only give the next operation after Flower has answered the previous query operation. That is, you need to solve this problem \textbf{online}. Please refer to the input format for more details.
We consider using leafy persistent balanced trees to maintain the sequence. At each non-leaf node, we store a set $S_u$ as a lazy tag, indicating that every set in the subtree rooted at $u$ contains $S_u$. However, since $|S_u|$ can be large, it's difficult to push down the tag efficiently. To address this, we split each set $S_u$ into two components: a part $T_u$ stored directly at node $u$, and a collection of child nodes $v_{u,1}, v_{u,2}, \cdots, v_{u,k}$, each storing a subset of $S_u$. We maintain the invariant: $S_u = T_u + \sum_{i=1}^k S_{v_{u,i}}$. Thus, we make the balanced tree persistent. When we create a new node $u$ and initialize its children as $ls(u)$ and $rs(u)$, we add $u$ into both $v_{ls(u)}$ and $v_{rs(u)}$. During split and merge, we do not modify $T_u$, which ensures these operations still run in logarithmic time. Type 1 (Insert): We split the balanced tree into two parts. Let the root of the first part be $rt$; we simply add the new element into $T_{rt}$. Type 2 (Reverse a segment): Just mark the root of the relevant subtree with a "reverse" flag. Type 3 (Delete): Since each element $x$ appears in only one $T$, we can directly remove it from that node's $T$. For queries, we first locate the leaf node $u$. To compute $S_u$, we need to compute all $S_{v_{u,i}}$. Due to persistence, we can guarantee: $\max(S_{v_{u,i}}) < \min(S_{v_{u,i+1}})$. Thus, we can sequentially check whether each $S_{v_{u,i}}$ is empty. We process this recursively. Whenever we encounter a node $x$ with $S_x = \emptyset$ that is not part of the latest tree version, $S_x$ will remain empty forever, and we can safely delete $x$. During a query, we encounter three types of nodes: Nodes with $|S_x| \ne 0$: We only encounter one such node per query - this is where we find the minimum value. Nodes with $|S_x| = 0$ and $x$ not in the latest tree: These nodes are deleted.
Since the total number of persistent nodes is $O(q \log n)$, these nodes also appear at most that many times. Nodes with $|S_x| = 0$ and $x$ in the latest tree: These are ancestors of leaf $u$, so there are at most $O(\log n)$ of them per query. Why can we do only $O(\log n)$ rounds of recursion? Because we replace all the nodes we pass through on the path, a node's subtree size does not change between the time it is created and the time it is removed from the tree. Furthermore, if we recurse from node $u$ to node $v$, then $v$ was the parent of $u$ at some point in the WBLT. Since the WBLT is weight-balanced, the size of $v$'s subtree is at least a constant multiple of the size of $u$'s subtree, and this constant, greater than $1$, depends on the balance parameter of your WBLT. Time complexity: $O(q \log n + n)$. Memory complexity: $O(q \log n + n)$.
[ "data structures" ]
3,500
#include <bits/stdc++.h> constexpr int N = 3e5 + 10, S = 1.1e7, SS = 2 * S; int n, q; int next[SS], val[SS], cnt; struct queue { int head, tail; void push(int x) { if(head) { next[tail] = ++ cnt, val[cnt] = x; tail = cnt; } else { head = tail = ++ cnt, val[cnt] = x; } assert(cnt < SS - 100); } void pop() {head = next[head];} int front() {return val[head];} bool check() {return head == tail;} bool empty() {return head == 0;} void clear() {head = tail = 0;} }; struct node { int ls, rs; queue fa; int val, exit; int size, rev; }a[S]; int tot; int id[N]; void pushup(int u) { a[u].size = a[a[u].ls].size + a[a[u].rs].size; } void setR(int u) { a[u].rev ^= 1; std::swap(a[u].ls, a[u].rs); } void setT(int u, int v) { a[u].fa.push(v); } void pushdown(int u) { if(a[u].rev) { setR(a[u].ls); setR(a[u].rs); a[u].rev = 0; } } int newnode() { int u = ++ tot; a[u].exit = 2; return u; } int newleaf() { int u = newnode(); a[u].size = 1; return u; } int join(int x, int y) { int u = newnode(); a[u].ls = x, a[u].rs = y; a[x].fa.push(u); a[y].fa.push(u); pushup(u); return u; } auto cut(int x) { pushdown(x); a[x].exit = 1; return std::make_pair(a[x].ls, a[x].rs); } int get_val(int u) { if(a[u].exit == 0) return 0; if(a[u].val != 0) return a[u].val; if(a[u].fa.empty()) return 0; int ans = 0; while(1) { ans = get_val(a[u].fa.front()); if(ans) return ans; if(a[u].fa.check()) break; a[u].fa.pop(); } if(a[u].exit == 1) { a[u].exit = 0; a[u].fa.pop(); a[u].fa.clear(); } return 0; } int newtag(int x) { int u = ++ tot; a[u].val = x; a[u].exit = 1; return u; } constexpr double ALPHA = 0.292; bool too_heavy(int sx, int sy) { return sy < ALPHA * (sx + sy); } int merge(int x, int y) { if(!x || !y) return x + y; if(too_heavy(a[x].size, a[y].size)) { auto [u, v] = cut(x); if(too_heavy(a[v].size + a[y].size, a[u].size)) { auto [z, w] = cut(v); return merge(merge(u, z), merge(w, y)); } else { return merge(u, merge(v, y)); } } else if(too_heavy(a[y].size, a[x].size)) { auto [u, v] = cut(y); 
if(too_heavy(a[u].size + a[x].size, a[v].size)) { auto [z, w] = cut(u); return merge(merge(x, z), merge(w, v)); } else { return merge(merge(x, u), v); } } else { return join(x, y); } } std::pair<int, int> split(int x, int k) { if(!x) return {0, 0}; if(!k) return {0, x}; if(k == a[x].size) return {x, 0}; auto [u, v] = cut(x); if(k <= a[u].size) { auto [w, z] = split(u, k); return {w, merge(z, v)}; } else { auto [w, z] = split(v, k - a[u].size); return {merge(u, w), z}; } } int find(int u, int k) { if(a[u].size == 1) return u; pushdown(u); if(k <= a[a[u].ls].size) return find(a[u].ls, k); else return find(a[u].rs, k - a[a[u].ls].size); } int build(int n) { if(n == 1) return newleaf(); int x = build(n / 2); int y = build(n - n / 2); return join(x, y); } int main() { std::ios::sync_with_stdio(false), std::cin.tie(0); std::cin >> n >> q; int rt = build(n); int lastans = 0; for(int i = 1; i <= q; i ++) { int o; std::cin >> o; if(o == 1) { int p; std::cin >> p; p = (p + lastans - 1) % n + 1; auto [A, B] = split(rt, p); setT(A, id[i] = newtag(i)); rt = merge(A, B); } else if(o == 2) { int p; std::cin >> p; p = (p + lastans - 1) % n + 1; auto [A, B] = split(rt, p); setR(A); rt = merge(A, B); } else if(o == 3) { int x; std::cin >> x; x = (x + lastans - 1) % q + 1; a[id[x]].exit = 0; } int p; std::cin >> p; p = (p + lastans - 1) % n + 1; int u = find(rt, p); std::cout << (lastans = get_val(u)) << '\n'; } return 0; }
2116
A
Gellyfish and Tricolor Pansy
Gellyfish and Flower are playing a game called "Duel". Gellyfish has $a$ HP, while Flower has $b$ HP. Each of them has a knight. Gellyfish's knight has $c$ HP, while Flower's knight has $d$ HP. They will play a game in rounds until one of the players wins. For $k = 1, 2, \ldots$ in this order, they will perform the following actions: - If $k$ is odd and Gellyfish's knight is alive: - Gellyfish's knight can attack Flower and reduce $b$ by $1$. If $b \leq 0$, \textbf{Gellyfish wins}. Or, - Gellyfish's knight can attack Flower's knight and reduce $d$ by $1$. If $d \leq 0$, Flower's knight dies. - If $k$ is even and Flower's knight is alive: - Flower's knight can attack Gellyfish and reduce $a$ by $1$. If $a \leq 0$, \textbf{Flower wins}. Or, - Flower's knight can attack Gellyfish's knight and reduce $c$ by $1$. If $c \leq 0$, Gellyfish's knight dies. As one of the smartest people in the world, you want to tell them who will win before the game. Assume both players play optimally. It can be proven that the game will never end in a draw. That is, one player has a strategy to end the game in a finite number of moves.
Think carefully about what happens after the death of either knight. A player who drops to $0$ HP loses the game outright; but when a player's knight dies, she loses the ability to attack, so in future rounds she can only be attacked by her opponent and will eventually lose the game as well. Thus a player's own HP is exactly as important as her knight's: she loses the game when either of them reaches $0$ HP. Therefore, the optimal strategy for both players is to attack whichever of the opponent's two HP values is lower. So when $\min(a, c)$ is greater than or equal to $\min(b, d)$, Gellyfish wins; otherwise Flower wins. Time complexity: $O(1)$ per test case. Memory complexity: $O(1)$ per test case.
[ "games", "greedy" ]
800
#include<bits/stdc++.h> using namespace std; inline void solve(){ int a = 0, b = 0, c = 0, d = 0; scanf("%d %d %d %d", &a, &b, &c, &d); if(min(a, c) >= min(b, d)) printf("Gellyfish\n"); else printf("Flower\n"); } int T = 0; int main(){ scanf("%d", &T); for(int i = 0 ; i < T ; i ++) solve(); return 0; }
2116
B
Gellyfish and Baby's Breath
Flower gives Gellyfish two permutations$^{\text{∗}}$ of $[0, 1, \ldots, n-1]$: $p_0, p_1, \ldots, p_{n-1}$ and $q_0, q_1, \ldots, q_{n-1}$. Now Gellyfish wants to calculate an array $r_0,r_1,\ldots,r_{n-1}$ through the following method: - For all $i$ ($0 \leq i \leq n-1$), $r_i = \max\limits_{j=0}^{i} \left(2^{p_j} + 2^{q_{i-j}} \right)$ But since Gellyfish is very lazy, you have to help her figure out the elements of $r$. Since the elements of $r$ are very large, you are only required to output the elements of $r$ modulo $998\,244\,353$. \begin{footnotesize} $^{\text{∗}}$An array $b$ is a permutation of an array $a$ if $b$ consists of the elements of $a$ in arbitrary order. For example, $[4,2,3,4]$ is a permutation of $[3,2,4,4]$ while $[1,2,2]$ is not a permutation of $[1,2,3]$. \end{footnotesize}
How to quickly compare $2^a + 2^b$ and $2^c + 2^d$ for given integers $a, b, c, d$ is the key. We are given two permutations $p$ and $q$, which means that each element appears exactly once in $p$ and exactly once in $q$. What's the point of this? For given integers $a, b, c, d$, if we want to compare $2^a+2^b$ and $2^c+2^d$, we actually need to compare $\max(a, b)$ with $\max(c, d)$ first, and $\min(a, b)$ with $\min(c, d)$ second. This is due to $2^k = 2^{k-1} + 2^{k-1}$; when $\max(c, d) < \max(a, b)$, $2^c + 2^d \leq 2 \times 2^{\max(c, d)} \leq 2^{\max(a, b)} < 2^a + 2^b$, and it's symmetric for $\max(a, b) < \max(c, d)$. So for all $i$, we only need to find $j = \arg \max\limits_{0 \leq l \leq i} p_l, k = \arg \max\limits_{0 \leq l \leq i} q_l$, then $r_i = \max(2^{p_j} + 2^{q_{i-j}}, 2^{p_{i-k}} + 2^{q_k})$. This is easily done in $O(n)$ time. Time complexity: $O(n)$ per test case. Memory complexity: $O(n)$ per test case.
[ "greedy", "math", "sortings" ]
1,300
#include<bits/stdc++.h> using namespace std; const int N = 1e5 + 5, Mod = 998244353; int n = 0, s[N] = {}, p[N] = {}, q[N] = {}, r[N] = {}; inline void solve(){ scanf("%d", &n); for(int i = 0 ; i < n ; i ++) scanf("%d", &p[i]); for(int i = 0 ; i < n ; i ++) scanf("%d", &q[i]); for(int i = 0, j = 0, k = 0 ; k < n ; k ++){ if(p[k] > p[i]) i = k; if(q[k] > q[j]) j = k; if(p[i] != q[j]){ if(p[i] > q[j]) printf("%d ", (s[p[i]] + s[q[k - i]]) % Mod); else printf("%d ", (s[q[j]] + s[p[k - j]]) % Mod); } else printf("%d ", (s[p[i]] + s[max(q[k - i], p[k - j])]) % Mod); } printf("\n"); } int T = 0; int main(){ s[0] = 1; for(int i = 1 ; i < N ; i ++) s[i] = s[i - 1] * 2 % Mod; scanf("%d", &T); while(T --) solve(); return 0; }
2117
A
False Alarm
Yousef is at the entrance of a long hallway with $n$ doors in a row, numbered from $1$ to $n$. He needs to pass through all the doors from $1$ to $n$ in order of numbering and reach the exit (past door $n$). Each door can be open or closed. If a door is open, Yousef passes through it in $1$ second. If the door is closed, Yousef can't pass through it. However, Yousef has a special button which he can use \textbf{at most once} at any moment. This button makes all closed doors become open for $x$ seconds. Your task is to determine if Yousef can pass through all the doors if he can use the button at most once.
When is the optimal time to use the button? It's not necessary to use the button when a door is already open. Therefore, it's always optimal to use the button as soon as we hit a closed door. Let's call the position of the first closed door $l$ and the position of the last closed door $r$. The length of this interval is $r - l + 1$, so we need to check that $x$ is greater than or equal to $r - l + 1$. (If every door is already open, the button is never needed and the answer is always yes.)
[ "greedy", "implementation" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n, x; cin >> n >> x; int l = 1e5, r = -1; for(int i = 0; i < n; i++) { int door; cin >> door; if(door == 1) { l = min(l, i); r = max(r, i); } } cout << (x >= r - l + 1 ? "YES" : "NO") << endl; } int main() { int t; cin >> t; while(t--) solve(); }
2117
B
Shrink
A shrink operation on an array $a$ of size $m$ is defined as follows: - Choose an index $i$ ($2 \le i \le m - 1$) such that $a_i \gt a_{i - 1}$ and $a_i \gt a_{i + 1}$. - Remove $a_i$ from the array. Define the score of a permutation$^{\text{∗}}$ $p$ as the maximum number of times that you can perform the shrink operation on $p$. Yousef has given you a single integer $n$. Construct a permutation $p$ of length $n$ with the \textbf{maximum} possible score. If there are multiple answers, you can output any of them. \begin{footnotesize} $^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
When are we unable to remove the maximum element? We can always remove the maximum element as long as it sits between two other elements. How can we generalize this? In a permutation, the maximum element occurs only once. Since all other elements are guaranteed to be smaller than the maximum, we can remove the maximum if there exists an element on its left and another on its right. After we remove the maximum element, another maximum appears, which we also want to remove. This process can continue until the length of the permutation becomes $2$, since both remaining elements are on the ends and cannot be removed. Therefore, any permutation with $1$ and $2$ on the ends is acceptable, since any value greater than $2$ will eventually be a maximum in between two elements.
[ "constructive algorithms" ]
800
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; for(int i = 2; i <= n; i++) cout << i << ' '; cout << 1 << endl; } int main() { int t; cin >> t; while(t--) solve(); }
2117
C
Cool Partition
Yousef has an array $a$ of size $n$. He wants to partition the array into one or more contiguous segments such that each element $a_i$ belongs to exactly one segment. A partition is called cool if, for every segment $b_j$, all elements in $b_j$ also appear in $b_{j + 1}$ (if it exists). That is, every element in a segment must also be present in the segment following it. For example, if $a = [1, 2, 2, 3, 1, 5]$, a cool partition Yousef can make is $b_1 = [1, 2]$, $b_2 = [2, 3, 1, 5]$. This is a cool partition because every element in $b_1$ (which are $1$ and $2$) also appears in $b_2$. In contrast, $b_1 = [1, 2, 2]$, $b_2 = [3, 1, 5]$ is not a cool partition, since $2$ appears in $b_1$ but not in $b_2$. Note that after partitioning the array, you do \textbf{not} change the order of the segments. Also, note that if an element appears several times in some segment $b_j$, it only needs to appear at least once in $b_{j + 1}$. Your task is to help Yousef by finding the maximum number of segments that make a cool partition.
The last segment of a valid partition must contain all distinct elements of the array. From the first hint, we can say that any segment ending at some position $r$ must contain all distinct elements of the prefix $[1, r]$. Claim: In a valid partition, if a segment ends at some position $r$, it must contain all distinct elements in the prefix $[1, r]$. Call the segments $b_1,b_2,\dots,b_k$. Notice that an element in some segment $b_j$ must also appear in $b_{j+1}, b_{j+2}, ..., b_k$. Now, let's prove by contradiction. Assume that some segment $b_j$ doesn't contain all distinct elements in $[1, r]$, where $r$ is the end position of $b_j$. Then, the distinct elements that don't appear in $b_j$ surely appear in some previous segment(s). This does not make a valid partition since at least one previous segment contains an element which $b_j$ does not have. Let $d_i$ be the number of distinct elements in the range $[1, i]$. To maximize the number of segments, we make them as small as possible. Let $r$ be the end position for the current segment, we will iterate from $r$ to the left until we find the first position $l$ such that the number of distinct elements in $[l, r] = d_r$, then increment our answer and set $r = l - 1$. We repeat until $l=1$. Similarly to the above solution, we can have a segment $[l, r]$ if and only if $[l, r]$ contains all of the same elements as $[1, r]$. We now build our segments greedily from the front, iterating through each element. Suppose the element we are currently on is $a_i$. We make the following claim: if we can end the current segment at $a_i$, we should do so. (There is one exception to this, which is covered after the proof.) Suppose we are able to end the current segment at $a_i$. Clearly, if $a_i$ is the last element of the array, we must end the segment. Otherwise, suppose that instead, it is optimal to end the segment at $a_j$ for $j>i$. 
Then we could just instead end the segment at $a_i$, and merge the elements from $a_{i+1}$ to $a_j$ into the next segment. This doesn't cause any issues with legality, because by assumption, it is legal to end the current segment at $a_i$, and the subsequent segment only gets bigger so it has at least as many distinct values as before. Furthermore, this does not affect the number of segments we have, so this is also an optimal solution. Now, we just end segments whenever we can. The only "special" case that needs to be taken care of, is whether or not the last segment is legal. But if it isn't, we can simply merge it into the last legal segment, which only makes this segment bigger and thus doesn't cause any issues since it has at least as many distinct values as before. It turns out that this doesn't really affect the implementation, since we just have to count the number of times we can end the segment. So how do we do this quickly? Let's keep a set of all elements in $[1, i]$, and a set of all elements in $[l, i]$. Then if these sets are the same size, we know that $[l, i]$ contains all of the same elements as $[1, i]$. If this is the case, we will end the segment at $i$. Then we will increment our answer, and set $l:=i+1$ (in the implementation, we will clear the set of elements in $[l, i]$).
[ "data structures", "greedy" ]
1,200
#include <bits/stdc++.h> using namespace std; void solve(){ int n, ans = 0; cin >> n; vector<int> a(n); for(int i=0; i<n; i++) cin >> a[i]; set<int> cur, seen; for(int i=0; i<n; i++){ cur.insert(a[i]); seen.insert(a[i]); if(cur.size() == seen.size()){ ans++; seen.clear(); } } cout << ans << '\n'; } int main(){ ios::sync_with_stdio(false); cin.tie(NULL); int t; cin >> t; while(t--) solve(); }
2117
D
Retaliation
Yousef wants to explode an array $a_1, a_2, \dots, a_n$. An array gets exploded when all of its elements become equal to zero. In one operation, Yousef can do \textbf{exactly} one of the following: - For every index $i$ in $a$, decrease $a_i$ by $i$. - For every index $i$ in $a$, decrease $a_i$ by $n - i + 1$. Your task is to help Yousef determine if it is possible to explode the array using any number of operations.
What happens when we do $1$ operation of the first type and $1$ operation of the second type? If we perform both types of the operation once, each element $a_i$ is decreased by $n + 1$. Suppose we are able to explode the array with $x$ operations of the first type and $y$ operations of the second type. Then, let's pair as many operations as we can, allowing us to perform $\min(x, y)$ pairs of operations. Now, assume we finished pairing the operations. This leaves us with some operations (possibly none) of only one type, so the array must become an arithmetic sequence; the absolute difference between two consecutive elements is the number of unpaired operations that we have to perform. We don't yet know the number of pairs of operations $\min(x, y)$ that we have to perform, since we don't know the values of $x$ and $y$, but we now know the number of extra operations we perform after pairing the operations. Since we need to have an arithmetic sequence in order to explode the array, we should verify that the differences between adjacent elements are all the same. Let's call the number of extra operations $k = |x-y|$, which is equal to the common difference between adjacent elements. If the array is increasing, the extra operations must be of the first type. Otherwise, they must be of the second type. Now, let's first perform the $k$ operations on the array. Since the consecutive differences are all the same, after performing the $k$ extra operations, all elements should be equal. It remains to check that the elements are non-negative and are divisible by $n + 1$, so we can now perform pairs of operations to explode the array. Suppose we did $x$ operations of the first type and $y$ operations of the second type to explode the array. What must the value of $a_1$ be? From the first hint, we know the first element $a_1 = 1 \cdot x + n \cdot y$. What about the second element? Let's consider the first two elements.
Suppose we needed $x$ operations of the first type and $y$ operations of the second type to explode the array, then we know $a_1 = 1 \cdot x + n \cdot y$. For the second element, we know $a_2 = 2 \cdot x + (n - 1) \cdot y$. Now, let's solve for $x$ and $y$. Subtracting $a_2$ from $a_1$ gives us $\begin{align*} a_1 - a_2 &= (x + n \cdot y) - (2 \cdot x + (n - 1) \cdot y) \newline &= y - x \newline \Longrightarrow x &= y - a_1 + a_2 \end{align*}$ Let's substitute $y - a_1 + a_2$ for $x$ in the first formula. $\begin{align*} a_1 &= (y - a_1 + a_2) + n \cdot y \newline \Longrightarrow y &= \frac{2 \cdot a_1 - a_2}{n + 1} \end{align*}$ Now, we have the value of $y$, which means we can easily find the value of $x$. From the formula $x = y - a_1 + a_2$, substitute $y$ with its value to get $x$. What remains is just to check if the array becomes $a_1 = a_2 = \dots = a_n = 0$ after all the operations.
[ "binary search", "math", "number theory" ]
1,200
#include <bits/stdc++.h> using namespace std; typedef long long ll; void solve() { ll n; cin >> n; vector<ll> v(n); for(auto &it : v) cin >> it; ll y = (2 * v[0] - v[1]) / (n + 1); ll x = v[1] - v[0] + y; if(y < 0 || x < 0) { cout << "NO" << endl; return; } for(int i = 0; i < n; i++) { v[i] -= x * (i + 1); v[i] -= y * (n - i); } for(int i = 0; i < n; i++) { if(v[i] != 0) { cout << "NO" << endl; return; } } cout << "YES" << endl; } int main() { int t; cin >> t; while(t--) solve(); }
2117
E
Lost Soul
You are given two integer arrays $a$ and $b$, each of length $n$. You may perform the following operation any number of times: - Choose an index $i$ $(1 \le i \le n - 1)$, and set $a_i := b_{i + 1}$, \textbf{or} set $b_i := a_{i + 1}$. \textbf{Before} performing any operations, you are allowed to choose an index $i$ $(1 \le i \le n)$ and remove both $a_i$ and $b_i$ from the arrays. This removal can be done \textbf{at most once}. Let the number of matches between two arrays $c$ and $d$ of length $m$ be the number of positions $j$ $(1 \le j \le m)$ such that $c_j = d_j$. Your task is to compute the maximum number of matches you can achieve.
If we can get a match at position $i$, then we can have at least $i$ matches. Why is this true? For a certain position $i$, given the ability to remove index $i + 1$, we can pull any value from the range $[i + 2, n]$. It's easy to see that if a match exists at position $i$, then we can repeatedly set $a_j := b_{j+1}$ and $b_j := a_{j+1}$ for all $j$, where $j$ starts as $i - 1$ and goes backwards until the start of the array. This allows us to have at least $i$ matches (including the match at position $i$). Therefore, the optimal approach is to maximize the index $i$ where $a_i = b_i$. Suppose we can't remove any element. Then, for some $a_i$, we can set it to any $b_j$ such that $j \pmod 2 \neq i \pmod 2$ and $j \gt i$, or set it to any $a_j$ such that $j \pmod 2 = i \pmod 2$ and $j \gt i$. However, if we wanted to set $a_i$ to some $b_j$ where $j \pmod 2 = i \pmod 2$, then we can just remove index $i + 1$ from the array. By similar reasoning, we can set $a_i$ to any $a_j$ where $j \pmod 2 \neq i \pmod 2$ and $j \gt i+1$. Notice that we removed index $i + 1$, so we can't set $a_i := a_{i+1}$. Now, we know that we can set any $a_i$ to any $a_j$ or $b_j$ such that $j \gt i + 1$. This can be generalized for $b_i$ as well. So the solution is that, for each position $i$, we want to check if any $a_j$ or $b_j$, such that $j > i + 1$, matches $a_i$ or $b_i$, allowing us to get a match at position $i$. We also need to check if position $i$ already contains a match, or if $a_i = a_{i + 1}$ or $b_i = b_{i + 1}$.
[ "brute force", "greedy" ]
1,600
#include <bits/stdc++.h> using namespace std; void solve() { int n; cin >> n; vector<int> a(n), b(n); for(auto &it : a) cin >> it; for(auto &it : b) cin >> it; vector<bool> seen(n + 1); if(a.back() == b.back()) { cout << n << endl; return; } int ans = -1; for(int i = n - 2; i >= 0; i--) { if(a[i] == b[i] || a[i] == a[i + 1] || b[i] == b[i + 1] || seen[a[i]] || seen[b[i]]) { ans = i; break; } seen[a[i + 1]] = seen[b[i + 1]] = true; } cout << ans + 1 << endl; } int main() { int t; cin >> t; while(t--) solve(); }
2117
F
Wildflower
Yousef has a rooted tree$^{\text{∗}}$ consisting of exactly $n$ vertices, which is rooted at vertex $1$. You would like to give Yousef an array $a$ of length $n$, where each $a_i$ $(1 \le i \le n)$ \textbf{can either be $1$ or $2$}. Let $s_u$ denote the sum of $a_v$ where vertex $v$ is in the subtree$^{\text{†}}$ of vertex $u$. Yousef considers the tree special if all the values in $s$ are \textbf{pairwise distinct} (i.e., all subtree sums are unique). Your task is to help Yousef count the number of different arrays $a$ that result in the tree being special. Two arrays $b$ and $c$ are different if there exists an index $i$ such that $b_i \neq c_i$. As the result can be very large, you should print it modulo $10^9 + 7$. \begin{footnotesize} $^{\text{∗}}$A tree is a connected undirected graph with $n - 1$ edges. $^{\text{†}}$The subtree of a vertex $v$ is the set of all vertices that pass through $v$ on a simple path to the root. Note that vertex $v$ is also included in the set. \end{footnotesize}
There can only be at most $2$ leaves. This means the tree is either a chain or Y-shaped. If the tree is a chain, subtree sums are always strictly increasing as you move towards the root, which means that the answer for any chain is $2^n$. Claim: for an answer to exist, the tree must have at most $2$ leaves. We can use the pigeonhole principle. Suppose the tree has more than $2$ leaves. For each leaf $\ell$, its subtree sum is $a_\ell \in \lbrace 1,2 \rbrace$. By the pigeonhole principle, among three leaves there must be two with the same value, so the subtree sums cannot all be distinct. Thus, it is impossible to make all values in $s$ pairwise distinct if there are $3$ or more leaves. We only need to consider two cases: the case of $1$ leaf and the case of $2$ leaves. $1$ leaf case: since $1$ and $2$ are positive, $s$ is strictly increasing as you move towards the root. Therefore, any array $a$ is valid, and the answer for this case is $2^n$. $2$ leaves case: let $v$ be the lowest common ancestor of leaf $x$ and leaf $y$. From $v$ going up towards the root, the same idea as in the $1$ leaf case applies, which means that any vertex on the path between the root and $v$ can be assigned either $1$ or $2$. Without loss of generality, suppose leaf $x$ has smaller depth than leaf $y$. Say we assign $a_x = 1$ and $a_y = 2$; it is easy to see that we are forced to assign $2$ to the vertices as we go up, until we eventually finish assigning a branch. So the number of vertices that are free to be assigned either $1$ or $2$ in $y$'s branch is simply $\texttt{depth}_y - \texttt{depth}_x$. If we assign $a_x = 2$ and $a_y = 1$ instead, then $y$'s branch has one more vertex forced to $2$, meaning there are only $\texttt{depth}_y - \texttt{depth}_x - 1$ free vertices in $y$'s branch. Finally, the last case to be checked is $\texttt{depth}_x = \texttt{depth}_y$. 
In this case, whether we assign $x$ to $1$ and $y$ to $2$, or we assign $x$ to $2$ and $y$ to $1$, all vertices in both branches are forced, so there are no free vertices in either branch. Let the number of leaves be $\texttt{cnt}$. Then we have that: $\texttt{Answer}$ = $\begin{cases} 0 &\text{if }\texttt{cnt} > 2 \newline 2^n &\text{if }\texttt{cnt} = 1 \newline (2^{\texttt{depth}_y - \texttt{depth}_x} + 2^{\texttt{depth}_y - \texttt{depth}_x - 1}) \cdot 2^{\texttt{depth}_v} &\text{if }\texttt{cnt} = 2\text{ and }\texttt{depth}_x < \texttt{depth}_y \newline 2^{\texttt{depth}_v} + 2^{\texttt{depth}_v} &\text{if }\texttt{cnt} = 2\text{ and }\texttt{depth}_x = \texttt{depth}_y \end{cases}$
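For small trees the case analysis can be cross-checked by brute force over all $2^n$ arrays; `count_special` is a helper name of my choosing, and it assumes vertices are numbered so that every child has a larger index than its parent:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force for small n: count arrays a in {1,2}^n whose subtree sums
// are pairwise distinct. parent[v] is the parent of vertex v, with
// parent[0] == -1 for the root and parent[v] < v for all other v.
int count_special(const vector<int>& parent) {
    int n = parent.size();
    int total = 0;
    for (int mask = 0; mask < (1 << n); mask++) {
        vector<long long> s(n);
        for (int v = 0; v < n; v++) s[v] = ((mask >> v) & 1) + 1;
        // accumulate subtree sums bottom-up (children have larger indices)
        for (int v = n - 1; v > 0; v--) s[parent[v]] += s[v];
        set<long long> seen(s.begin(), s.end());
        total += ((int)seen.size() == n);
    }
    return total;
}
```

For example, the Y-shaped tree with root $0$, leaf $1$, and a two-vertex branch $2 \to 3$ has $\texttt{depth}_v = 1$, $\texttt{depth}_x = 2$, $\texttt{depth}_y = 3$, so the formula predicts $(2^1 + 2^0) \cdot 2^1 = 6$.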
[ "combinatorics", "dfs and similar", "trees" ]
1,800
#include <bits/stdc++.h> using namespace std; const int N = 2e5 + 10, MOD = 1e9 + 7; #define int long long vector<int> adj[N], lens; int pw[N]; int lca; void dfs(int u, int par, int len) { if(adj[u].size() > 2) lca = len; bool leaf = true; for(int v : adj[u]) { if(v != par) { dfs(v, u, len + 1); leaf = false; } } if(leaf) lens.push_back(len); } void solve() { int n; cin >> n; for(int i = 1; i <= n; i++) adj[i].clear(); lens.clear(); lca = -1; for(int i = 0; i < n - 1; i++) { int u, v; cin >> u >> v; adj[u].push_back(v); adj[v].push_back(u); } adj[1].push_back(0); // dummy node dfs(1, 0, 1); if(lens.size() > 2) cout << 0 << endl; else if(lens.size() == 1) cout << pw[n] << endl; else { int diff = abs(lens[0] - lens[1]); int x = diff + lca; if(diff) cout << (pw[x] + pw[x - 1]) % MOD << endl; else cout << (2 * pw[x]) % MOD << endl; } } signed main() { pw[0] = 1; for(int i = 1; i < N; i++) pw[i] = (pw[i - 1] * 2) % MOD; int t; cin >> t; while(t--) solve(); }
2117
G
Omg Graph
You are given an undirected connected weighted graph. Define the cost of a path of length $k$ to be as follows: - Let the weights of all the edges on the path be $w_1,...,w_k$. - The cost of the path is $(\min_{i = 1}^{k}{w_i}) + (\max_{i=1}^{k}{w_i})$, or in other words, the maximum edge weight + the minimum edge weight. Across all paths from vertex $1$ to $n$, report the cost of the path with minimum cost. Note that the path is not necessarily simple.
Try fixing the minimum edge on the path. There are two common ways to solve this problem; one uses DSU and the other uses Dijkstra. I will explain the Dijkstra approach here: The main idea is to pick some edge $e$ and pretend that $e$ is the minimum-weight edge, then find the smallest maximum edge weight across all paths containing $e$. This method will only ever overestimate the answer, and equality holds when we choose $e$ to be the minimum-weight edge of the optimal path, so the answer must be the minimum of these values across all possible choices. Let's define a new cost function. Instead of $cost(w_1,...,w_k) = (\min_{i = 1}^{k}{w_i}) + (\max_{i=1}^{k}{w_i})$, let's define $cost_j(w_1,...,w_k) = w_j + (\max_{i=1}^{k}{w_i})$, i.e. the weight of edge $j$ plus the max weight. By the above reasoning, the minimum value of $cost_j(w_1,...,w_k)$ across all $j$ is the same as the minimum value of $cost(w_1,...,w_k)$. This function is a lot easier to minimize than the initial one. Let's iterate over each edge $j$ - denote its endpoints by $u_j$ and $v_j$, and its weight by $w_j$ - and find the minimum value of $cost_j(w_1,...,w_k)$. Now we just need to find the minimum possible value of $\max_{i=1}^{k}{w_i}$ across all paths that contain $j$. To do this, we want to minimize the max weight of the path from $1$ to $u_j$ and from $v_j$ to $n$, so we have reduced the problem to finding the minimum max weight on a path from $1$ to all other nodes, and the minimum max weight on a path from every node to $n$. But how do we do this? The idea is very similar to Dijkstra - recall the proof that Dijkstra finds the minimum sum of weights on a path. The main idea is that whenever we pop a node from the heap, we know that the minimum cost to it has already been found. The same reasoning proves that if we change the path cost from path sum to path max, the algorithm still works. 
So all we have to do is simply run this max Dijkstra variation from $1$ and from $n$. Now we just need to find the minimum value of $w_i + \max(\mathrm{cost}(1, u_i), \mathrm{cost}(v_i, n), w_i)$ across all edges $i$.
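A minimal sketch of the "max Dijkstra" variation described above; the function name `minimax_dist` and the edge format are my own choices, not from the solution code:

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// "Max Dijkstra": dist[v] = minimum possible maximum edge weight over all
// paths from src to v. The usual correctness proof carries over because
// max(d, w), like d + w, never decreases along a path.
// Edges are {u, v, w} with 0-indexed endpoints.
vector<ll> minimax_dist(int n, const vector<array<int, 3>>& edges, int src) {
    vector<vector<pair<int, ll>>> g(n);
    for (auto [u, v, w] : edges) {
        g[u].push_back({v, w});
        g[v].push_back({u, w});
    }
    vector<ll> dist(n, LLONG_MAX);
    priority_queue<pair<ll, int>, vector<pair<ll, int>>, greater<>> pq;
    dist[src] = 0;  // the empty path has no edges; its max is taken as 0
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d != dist[u]) continue;  // stale heap entry
        for (auto [v, w] : g[u]) {
            ll nd = max(d, w);
            if (nd < dist[v]) {
                dist[v] = nd;
                pq.push({nd, v});
            }
        }
    }
    return dist;
}
```

For the original problem, one run from vertex $1$ and one from vertex $n$ are then combined edge by edge as in the editorial.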
[ "brute force", "dsu", "graphs", "greedy", "shortest paths", "sortings" ]
1,900
#include<bits/stdc++.h> using namespace std; typedef long long ll; #define debug(x) cout << #x << " = " << x << "\n"; #define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n"; mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); } ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); } void solve(){ int n, m; cin >> n >> m; vector<vector<array<int, 2>>> g(n); for (int i = 0; i < m; i++){ int u, v, w; cin >> u >> v >> w; u--; v--; g[u].push_back({v, w}); g[v].push_back({u, w}); } auto uwu = [&](int u) -> vector<ll>{ vector<ll> dis(n, 1e18); vector<bool> vis(n); priority_queue<array<ll, 2>> q; q.push({0, u}); while (q.size()){ auto [d, c] = q.top(); d = -d; q.pop(); if (vis[c]) continue; vis[c] = true; dis[c] = d; for (auto [x, w] : g[c]){ if (vis[x]) continue; q.push({-max(d, 1LL * w), x}); } } return dis; }; ll ans = 1e18; vector<ll> dis_s = uwu(0), dis_t = uwu(n - 1); for (int u = 0; u < n; u++){ for (auto [v, w] : g[u]){ ans = min(ans, {max({dis_s[u], dis_t[v], 1LL * w}) + w}); } } cout << ans << "\n"; } int main(){ ios::sync_with_stdio(false); cin.tie(0); int t; cin >> t; while (t--) solve(); }
2117
H
Incessant Rain
\textbf{Note the unusual memory limit.} Silver Wolf gives you an array $a$ of length $n$ and $q$ queries. In each query, she replaces an element in $a$. After each query, she asks you to output the maximum integer $k$ such that there exists an integer $x$ such that it is the $k$-majority of a subarray$^{\text{∗}}$ of $a$. An integer $y$ is the $k$-majority of array $b$ if $y$ appears at least $\lfloor \frac{|b|+1}{2} \rfloor +k$ times in $b$, where $|b|$ represents the length of $b$. Note that $b$ may not necessarily have a $k$-majority. \begin{footnotesize} $^{\text{∗}}$An array $b$ is a subarray of an array $a$ if $b$ can be obtained from $a$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end. \end{footnotesize}
Try to think of an offline solution. Rephrase the problem in terms of maximum subarray sums. The unusual memory limit is meant to cut persistent segment tree solutions. Obviously it isn't perfect (I apologize if it caused issues), but it's the best I could do. Let's first pretend that $x$ in the problem is fixed. For example, if $x = 2$, then we want to find the maximum $k$ such that $2$ is a $k$-majority of some subarray. To solve this problem, let's build an array $b_1, b_2, \ldots, b_n$ such that $b_i = 1$ if $a_i = x$, and $b_i = -1$ otherwise. Let $s$ be the maximum subarray sum over all subarrays of $b$. The answer to the problem is $\lfloor \frac{s}{2} \rfloor$. Now let's consider how to handle updates. If we are asked to update index $k$ with an element $y$, we have two cases. If $a_k \neq x$ and $y=x$, then we set $b_k = 1$. Otherwise, if $a_k = x$ and $y \neq x$, then we set $b_k = -1$. Then, we can answer the query by finding the maximum subarray sum of $b$ using a segment tree. This is a standard problem. Let $b_x$ denote the array $b$ built as above for a fixed value $x$. Now let's stop fixing $x$. For each update, notice that at most two of these arrays are modified (precisely, $b_{a_k}$ and $b_y$). To answer the query, let $\texttt{res}_x$ denote the maximum subarray sum of $b_x$ after the updates so far. The answer to the query is just $\lfloor \frac{\max(\texttt{res}_1, \texttt{res}_2, \ldots, \texttt{res}_n)}{2} \rfloor$. Additionally, because of the update, only $\texttt{res}_{a_k}$ and $\texttt{res}_y$ might change. This motivates an approach where we track $\texttt{res}_x$ over all $x$ and extract the maximum. For each $x$, let's keep track of all updates that modify an index $k$ where $a_k = x$ either before or after the update. Notice that in the former case we are setting $(b_x)_k = -1$, and in the latter case we are setting $(b_x)_k = 1$. 
Notice that we are tracking at most $2q$ updates in total because each update modifies at most two arrays $b_x$, as explained in the previous paragraph. Let's store all such updates in an array $v_x$. Let's try to construct $b_x$ for each $x \in [1, n]$ now. To do so, let's first create a global array $b$, where initially $b_i = -1$ for all $1 \leq i \leq n$. Then, we can iterate over all elements of $a$ where $a_j = x$ and set $b_j = 1$. Now we can iterate over the queries in $v_x$. For each query in $v_x$, we track the maximum subarray sum of $b$ after applying it; if it came from the $j$'th query overall, we record that $\texttt{res}_x$ changes to this value after the $j$'th query. We can iterate over all $x \in [1, n]$ and perform the above process in $O((n + q)\log n)$ time total. Now, we know when to update $\texttt{res}_x$ for each $x \in [1, n]$. Finally, let's loop through all $q$ queries one last time and store all $\texttt{res}_x$ in a multiset. If the $j$'th query affects $\texttt{res}_y$, then we erase the old $\texttt{res}_y$ from the multiset and replace it with the new one. To answer the query, we simply extract the maximum element in the multiset. To recap the reduction: consider the $k$-majority for a single element $y$. By definition, for a subarray $b$, the maximum $k$ such that $y$ is a $k$-majority of $b$ is $\lfloor{\frac{2 \cdot \text{cnt}(y) - |b|}{2}}\rfloor$. Instead of computing this directly, we construct a new array $c$, where $c_i = 1$ if $b_i = y$ and $c_i = -1$ otherwise. Then, the best $k$ for the element $y$ is the maximum sum among all subarrays of $c$, floor-divided by $2$. This transformation reduces our problem to finding the maximal subarray sum. Since we must efficiently handle updates, we require a dynamic data structure capable of both updates and queries. This can be accomplished using a segment tree that maintains the maximum subarray sum, maximum prefix sum, maximum suffix sum, and total sum within each node. 
To apply this method to all possible values of $y$, we process all queries offline. Initially, we set all elements of array $c$ to $-1$. For each distinct element $y$, we first update the array $c$ by setting $c_i = 1$ wherever $a_i$ initially equals $y$. Next, we sequentially apply all updates involving $y$, calculating the maximal $k$-majority for $y$ after each update. This procedure generates $q + n$ segments, each segment holding the maximal $k$-majority information for an element $y$. By inserting and removing these maximum values in a multiset, we efficiently retrieve the global maximal $k$-majority at every step. This gives $O((n + q)\log n)$ in total.
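The per-node bookkeeping for the maximum subarray sum can be sketched as follows; folding the leaves left to right with the same `merge` a segment tree would use yields the same `best` value (the names `Node`, `leaf`, and `merge` are mine, mirroring the standard technique rather than any specific code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Per-node data for the max-subarray segment tree: total sum, best prefix,
// best suffix, best subarray. The empty subarray (value 0) is allowed, as
// in the solution code.
struct Node { long long sum, pref, suf, best; };

Node leaf(long long x) { return {x, max(0LL, x), max(0LL, x), max(0LL, x)}; }

Node merge(const Node& l, const Node& r) {
    return {l.sum + r.sum,
            max(l.pref, l.sum + r.pref),        // prefix may cross into r
            max(r.suf, r.sum + l.suf),          // suffix may cross into l
            max({l.best, r.best, l.suf + r.pref})};  // best may straddle
}

// Fold a whole array; for the array c built for a value y, the answer
// contribution would be max_subarray(c) / 2.
long long max_subarray(const vector<long long>& a) {
    Node acc = leaf(a[0]);
    for (size_t i = 1; i < a.size(); i++) acc = merge(acc, leaf(a[i]));
    return acc.best;
}
```

In the real solution this merge runs inside the segment tree so that point updates cost $O(\log n)$.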
[ "data structures", "divide and conquer", "sortings" ]
2,500
#include <bits/stdc++.h> using namespace std; using ll = long long; #define F first #define S second #define all(x) x.begin(), x.end() #define pb push_back #define FOR(i,a,b) for(int i = (a); i < (b); ++i) #define trav(a,x) for(auto& a: x) #define sz(x) (int)x.size() template<typename T> istream& operator>>(istream& in, vector<T>& a) {for(auto &x : a) in >> x; return in;}; struct Node { int sum; // total sum of segment int pref; // max prefix sum int suff; // max suffix sum int best; // max subarray sum Node(): sum(0), pref(0), suff(0), best(0) {} Node(int x): sum(x), pref(max(0,x)), suff(max(0,x)), best(max(0,x)) {} }; Node merge(const Node &L, const Node &R) { Node res; res.sum = L.sum + R.sum; res.pref = max(L.pref, L.sum + R.pref); res.suff = max(R.suff, R.sum + L.suff); res.best = max({ L.best, R.best, L.suff + R.pref }); return res; } struct segtree { int n; vector<Node> st; segtree(int _n) { n = _n; st.resize(4*n); build(1, 0, n-1); } void build(int p, int l, int r) { if (l == r) { st[p] = Node(-1); return; } int m = (l + r) / 2; build(p<<1, l, m); build(p<<1|1, m+1, r); st[p] = merge(st[p<<1], st[p<<1|1]); } void update(int p, int l, int r, int idx, int val) { if (l == r) { st[p] = Node(val); return; } int m = (l + r) / 2; if (idx <= m) update(p<<1, l, m, idx, val); else update(p<<1|1, m+1, r, idx, val); st[p] = merge(st[p<<1], st[p<<1|1]); } void update(int idx, int val) { update(1, 0, n-1, idx, val); } Node query(int p, int l, int r, int i, int j) { if (i > r || j < l) { Node nullnode; nullnode.sum = 0; nullnode.pref = nullnode.suff = nullnode.best = INT_MIN; return nullnode; } if (l >= i && r <= j) { return st[p]; } int m = (l + r) / 2; Node L = query(p<<1, l, m, i, j); Node R = query(p<<1|1, m+1, r, i, j); if (L.best == INT_MIN) return R; if (R.best == INT_MIN) return L; return merge(L, R); } int max_subarray() { Node res = query(1, 0, n-1, 0, n-1); return res.best; } }; void solve() { int n, q; cin >> n >> q; vector<int> a(n); cin >> a; 
vector<vector<int>> at(n+1); FOR(i,0,n) at[a[i]].pb(i); vector<vector<array<int, 3>>> updates(n+1); FOR(idx,0,q){ int i, x; cin >> i >> x; --i; if(x != a[i]){ updates[a[i]].pb({idx, i, -1}); a[i] = x; updates[x].pb({idx, i, 1}); } } vector<vector<int>> final_at(n+1); FOR(i,0,n) final_at[a[i]].pb(i); segtree st(n); vector<int> init(n+1); vector<vector<pair<int,int>>> change(q); FOR(x,1,n+1){ trav(i, at[x]){ st.update(i, 1); } int cur = st.max_subarray(); init[x] = cur; trav(i, updates[x]){ st.update(i[1], i[2]); int new_cur = st.max_subarray(); change[i[0]].pb({cur, new_cur}); cur = new_cur; } trav(i, final_at[x]){ st.update(i, -1); } } multiset<int> ms; FOR(i,1,n+1) ms.insert(init[i]); FOR(i,0,q){ trav(j, change[i]){ ms.erase(ms.find(j.F)); ms.insert(j.S); } cout << *prev(ms.end())/2 << " "; } cout << "\n"; } signed main() { cin.tie(0) -> sync_with_stdio(0); int t = 1; cin >> t; for(int test = 1; test <= t; test++){ solve(); } }
2118
A
Equal Subsequences
We call a bitstring$^{\text{∗}}$ perfect if it has the same number of $\mathtt{101}$ and $\mathtt{010}$ subsequences$^{\text{†}}$. Construct a perfect bitstring of length $n$ where the number of $\mathtt{1}$ characters it contains is exactly $k$. It can be proven that the construction is always possible. If there are multiple solutions, output any of them. \begin{footnotesize} $^{\text{∗}}$A bitstring is a string consisting only of the characters $\mathtt{0}$ and $\mathtt{1}$. $^{\text{†}}$A sequence $a$ is a subsequence of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly zero or all) characters. \end{footnotesize}
It is somewhat difficult to count the number of such subsequences in an arbitrary string. Can you think of strings where it is trivial? The number of such subsequences can be $0$. Key observation: A bitstring where all $\mathtt{1}$ bits come before all $\mathtt{0}$ bits (or vice versa) is perfect, as it has no $\mathtt{101}$ or $\mathtt{010}$ subsequences. You can fix the number of $\mathtt{1}$ bits to be $k$ and then put the $n-k$ $\mathtt{0}$ bits on the other side. This achieves a perfect bitstring with the number of $\mathtt{1}$ bits being exactly $k$.
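A quick way to convince yourself the construction is perfect is to count both patterns as subsequences with a small DP; `count_subseq` is a helper name of my choosing:

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// Count occurrences of pat as a subsequence of s.
// dp[j] = number of ways to match the first j characters of pat.
ll count_subseq(const string& s, const string& pat) {
    vector<ll> dp(pat.size() + 1, 0);
    dp[0] = 1;  // empty prefix matches one way
    for (char c : s)
        for (int j = (int)pat.size() - 1; j >= 0; j--)
            if (c == pat[j]) dp[j + 1] += dp[j];
    return dp[pat.size()];
}
```

For any string of the form $\mathtt{1}^k\mathtt{0}^{n-k}$ (or $\mathtt{0}^{n-k}\mathtt{1}^k$), both counts are $0$, so the string is perfect.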
[ "constructive algorithms", "greedy" ]
800
#include <bits/stdc++.h> using namespace std; int main() { int t; cin >> t; for (int tc = 1; tc <= t; tc++) { int n, k; cin >> n >> k; for (int i = 0; i < n-k; i++) cout << 0; for (int i = 0; i < k; i++) cout << 1; cout << "\n"; } return 0; }
2118
B
Make It Permutation
There is a matrix $A$ of size $n\times n$ where $A_{i,j}=j$ for all $1 \le i,j \le n$. In one operation, you can select a row and reverse any subarray$^{\text{∗}}$ in it. Find a sequence of at most $2n$ operations such that every column will contain a permutation$^{\text{†}}$ of length $n$. It can be proven that the construction is always possible. If there are multiple solutions, output any of them. \begin{footnotesize} $^{\text{∗}}$An array $a$ is a subarray of an array $b$ if $a$ can be obtained from $b$ by deleting zero or more elements from the beginning and zero or more elements from the end. $^{\text{†}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array). \end{footnotesize}
The answer matrix will have permutations in every row and column. Can you think of such a matrix? Let our final matrix have all cyclic shifts of the identity permutation in its rows. Can you perform a cyclic shift using $3$ operations? One of the operations performed on each row becomes redundant. Key observation: You can cyclically shift an array using $3$ operations: "$1$ $n$", "$1$ $i$", "$i+1$ $n$" (each pair denoting a subarray reversal). Performing these with a different $i$ for each row, we obtain all cyclic shifts of the identity permutation. To optimize this to $2n$ operations, observe that the initial "$1$ $n$" reversal is redundant: dropping it leaves every row reversed, and each column still contains a permutation.
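The construction without the redundant full reversals can be verified for small $n$ by direct simulation; the helper name below is mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simulate the 2n-1 operation construction: row i (1-based, i < n) gets
// reverse(1..i) then reverse(i+1..n); row n gets reverse(1..n).
// Returns true if every column ends up containing a permutation of 1..n.
bool columns_are_permutations(int n) {
    vector<vector<int>> A(n, vector<int>(n));
    for (int i = 0; i < n; i++) iota(A[i].begin(), A[i].end(), 1);
    for (int i = 0; i + 1 < n; i++) {
        reverse(A[i].begin(), A[i].begin() + (i + 1));  // reverse(1..i+1)
        reverse(A[i].begin() + (i + 1), A[i].end());    // reverse(i+2..n)
    }
    reverse(A[n - 1].begin(), A[n - 1].end());          // last row: full
    for (int j = 0; j < n; j++) {
        set<int> col;
        for (int i = 0; i < n; i++) col.insert(A[i][j]);
        if ((int)col.size() != n) return false;  // a duplicate in column j
    }
    return true;
}
```

Each row becomes a reversed cyclic shift of the identity, and the shifts are pairwise distinct, which is why every column works out.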
[ "constructive algorithms" ]
1,200
#include <bits/stdc++.h> using namespace std; using ll = long long; int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; for (int tc = 1; tc <= t; tc++) { int n; cin >> n; cout << 2*n-1 << "\n"; for (int i = 1; i < n; i++) { cout << i << " " << 1 << " " << i << "\n"; cout << i << " " << i+1 << " " << n << "\n"; } cout << n << " 1 " << n << "\n"; } return 0; }
2118
C
Make It Beautiful
You are given an array $a$ of $n$ integers. We define the $\text{beauty}$ of a number $x$ to be the number of $1$ bits in its binary representation. We define the beauty of an array to be the sum of beauties of the numbers it contains. In one operation, you can select an index $i$ $(1 \le i \le n)$ and increase $a_i$ by $1$. Find the maximum beauty of the array after doing \textbf{at most} $k$ operations.
Think in binary. Let $x$ be an integer. What is the smallest number $y>x$ that is more beautiful than $x$? $y$ always equals $x$ with the lowest $0$ bit set to $1$. Key observation: To increase the beauty of a number, it is always optimal to set the lowest $0$ bit to $1$. Let our number be $x$ and call its lowest $0$ bit special. Let $y$ be $x$ with the special bit set to $1$. To increase the beauty, we must increase $x$ by at least $1$. This will set the special bit to $1$ and all lower bits to $0$. Until we reach $y$, there will always be at least one $0$ bit below the special bit while the higher bits stay the same. This means that no number strictly between $x$ and $y$ is more beautiful than $x$, showing that the observation is correct. From this, we can construct a solution. Count the number of $0$ bits at each position. Iterate over the bit positions from lowest to highest; at each position, greedily set the $0$ bits to $1$ one by one until we run out of $0$ bits at that position or the remaining $k$ is less than the value of the bit. This solution has a time complexity of $O(N\log{10^{18}})$.
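The greedy condenses into a few lines; this is a sketch of the editorial's idea with a hypothetical helper name:

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// Greedy from the editorial: for each bit position from low to high, spend
// 2^j of the budget to turn a 0 bit into a 1 wherever possible. Setting a
// single 0 bit never disturbs the other bits of that number.
ll max_beauty(const vector<ll>& a, ll k) {
    ll ans = 0;
    for (ll x : a) ans += __builtin_popcountll(x);  // initial beauty
    for (int j = 0; j <= 60; j++) {
        ll bit = 1LL << j;
        for (ll x : a)
            if (!(x & bit) && k >= bit) {  // a 0 bit we can afford
                ans++;
                k -= bit;
            }
    }
    return ans;
}
```

For example, with $a = \{5\}$ (binary $101$) and $k = 2$, the single zero bit of value $2$ is filled, giving beauty $3$.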
[ "bitmasks", "data structures", "greedy", "math" ]
1,300
#include <bits/stdc++.h> using namespace std; using ll = long long; void solve() { ll n, k; cin >> n >> k; ll ans = 0; vector<ll> a(n); for (ll &i : a) { cin >> i; ans += __builtin_popcountll(i); } for (int j = 0; j <= 60; j++) { ll bb = (1ll<<j); for (ll x : a) { if (!(x & bb) && k >= bb) { ans++; k -= bb; } } } cout << ans << "\n"; } int main() { ios::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); return 0; }
2118
D2
Red Light, Green Light (Hard version)
\textbf{This is the hard version of the problem. The only difference is the constraint on $k$ and the total sum of $n$ and $q$ across all test cases. You can make hacks only if both versions of the problem are solved.} You are given a strip of length $10^{15}$ and a constant $k$. There are exactly $n$ cells that contain a traffic light; each has a position $p_i$ and an initial delay $d_i$ for which $d_i < k$. The $i$-th traffic light works the following way: - it shows red at the $l \cdot k + d_i$-th second, where $l$ is an integer, - it shows green otherwise. At second $0$, you are initially positioned at some cell on the strip, facing the positive direction. At each second, you perform the following actions in order: - If the current cell contains a red traffic light, you turn around. - Move one cell in the direction you are currently facing. You are given $q$ different starting positions. For each one, determine whether you will eventually leave the strip within $10^{100}$ seconds.
Divide the problem into two subproblems: find the next traffic light efficiently, and detect cycles efficiently. For D1 it is enough to do the first subproblem in $O(N)$ and the second with $O(N)$ calls of the first. For D2 we need to do better. Try to simulate the problem by hand. Iterate over all traffic lights and check whether we would collide with each one if we don't turn; then we can select the next traffic light in the correct direction. This is $O(N)$ per step. If we collide with the same traffic light from the same direction twice, then the answer is no. For detecting cycles we can store the traffic lights we collided with, together with the direction. Notice that we can only collide with a specific traffic light at a unique time modulo $k$. This solution has a time complexity of $O(Q \cdot N^2)$. First, let's quickly find the next traffic light in the positive direction. Imagine movement as moving along diagonals; refer to the images in the statement for a visual representation. Group the traffic lights by $(d_i-p_i)\bmod k$; when at position $x$ at time $t$, you will collide with the group $(t-x)\bmod k$, and a binary search over the sorted positions in this group gives the next collision. To do the same for the anti-diagonals, group by $(d_i+p_i)\bmod k$ and search by $(t+x)\bmod k$. We can still simulate each query, but after finding out the answer we can store the result for each visited state. In later queries, we can refer to the already computed states; in total, each state is checked at most once. Alternatively, you can notice that after colliding with a traffic light from a specific direction, the next traffic light hit will always be the same. From this you can build a functional graph and detect cycles before reading the queries. Both solutions have a time complexity of $O((N+Q)\log{N})$.
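A sketch of the diagonal grouping for the positive direction; the helper name and signature are my own, and unlike the real solution it rebuilds the groups on every call. Note the `((x % k) + k) % k` normalization, since `%` in C++ can return negative values:

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

// While moving in the positive direction, t - x stays constant, so a light
// (p, d) is hit iff (d - p) mod k == (t - x) mod k and p > x. Grouping the
// lights by (d - p) mod k reduces "find the next collision" to one binary
// search. (The anti-diagonal case groups by (d + p) mod k instead.)
ll next_red_to_right(const vector<pair<ll, ll>>& lights, ll x, ll t, ll k) {
    map<ll, vector<ll>> groups;
    for (auto [p, d] : lights) groups[(((d - p) % k) + k) % k].push_back(p);
    for (auto& [key, v] : groups) sort(v.begin(), v.end());
    ll key = (((t - x) % k) + k) % k;
    auto it = groups.find(key);
    if (it == groups.end()) return -1;  // no collision: we leave the strip
    auto jt = upper_bound(it->second.begin(), it->second.end(), x);
    return jt == it->second.end() ? -1 : *jt;
}
```

The full solution precomputes the groups once and adds memoization of (light, direction) states on top of this search.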
[ "binary search", "brute force", "data structures", "dfs and similar", "dp", "graphs", "implementation", "math", "number theory" ]
2,200
#include <bits/stdc++.h> using namespace std; using ll = long long; void solve() { ll m, k; cin >> m >> k; vector<ll> p(m+1), d(m+1); for (int i = 1; i <= m; i++) cin >> p[i]; for (int i = 1; i <= m; i++) cin >> d[i]; map<ll, vector<ll>> mpl, mpr; map<ll, ll> traffic; for (int i = 1; i <= m; i++) { traffic[p[i]] = d[i]; mpl[(d[i]+p[i])%k].emplace_back(p[i]); mpr[(((d[i]-p[i])%k)+k)%k].emplace_back(p[i]); } auto get_next_left = [&](ll pos, ll t) { ll val = (t + pos) % k; auto &vec = mpl[val]; auto it = lower_bound(vec.begin(), vec.end(), pos); if (it == vec.begin()) return -1ll; it--; return *it; }; auto get_next_right = [&](ll pos, ll t) { ll val = (((t - pos) % k) + k) % k; auto &vec = mpr[val]; auto it = lower_bound(vec.begin(), vec.end(), pos+1); if (it == vec.end()) return -1ll; return *it; }; map<pair<ll, ll>, bool> dp; int q; cin >> q; for (int i = 1; i <= q; i++) { ll x; cin >> x; ll dir = 1, t = 0; set<pair<ll, ll>> states; bool ok = false; if (traffic.count(x) && traffic[x] == 0) dir ^= 1; for (int it = 0; it < 2*m; it++) { ll y = dir ? get_next_right(x, t) : get_next_left(x, t); if (y == -1) { ok = true; break; } else { t += abs(y-x); x = y; dir ^= 1; } if (states.count({x, dir})) break; states.insert({x, dir}); if (dp.count({x, dir})) { ok = dp[{x, dir}]; break; } } for (auto [a, b] : states) { dp[{a, b}] = ok; } cout << (ok?"YES\n":"NO\n"); } } int main() { ios::sync_with_stdio(0); cin.tie(0); int t = 1; cin >> t; while (t--) solve(); return 0; }
2118
E
Grid Coloring
There is an $n\times m$ grid with each cell initially white. You have to color all the cells one-by-one. After you color a cell, all the \textbf{colored cells} furthest from it receive a penalty. Find a coloring order where no cell has more than $3$ penalties. \textbf{Note that $n$ and $m$ are both odd.} The distance metric used is the chessboard distance, while ties between cells are decided with Manhattan distance. Formally, a cell $(x_2, y_2)$ is further away than $(x_3, y_3)$ from a cell $(x_1, y_1)$ if one of the following holds: - $\max\big(\lvert x_1 - x_2 \rvert, \lvert y_1 - y_2 \rvert\big)>\max\big(\lvert x_1 - x_3 \rvert, \lvert y_1 - y_3 \rvert\big)$ - $\max\big(\lvert x_1 - x_2 \rvert, \lvert y_1 - y_2 \rvert\big)=\max\big(\lvert x_1 - x_3 \rvert, \lvert y_1 - y_3 \rvert\big)$ \textbf{and} $\lvert x_1 - x_2 \rvert + \lvert y_1 - y_2 \rvert>\lvert x_1 - x_3 \rvert + \lvert y_1 - y_3 \rvert$ It can be proven that at least one solution always exists. \begin{center} {\small Example showing penalty changes after coloring the center of a $5 \times 5$ grid. The numbers indicate the penalty of the cells.} \end{center}
Why is it important that both $N$ and $M$ are odd? Try generalizing the trivial case when $N=1$. Try extending the solution from a smaller case... in a literal sense. Key observation: for $N \ge M$ with $N, M$ both odd, an $(N - 2) \times M$ grid can easily be extended into an $N \times M$ grid. To achieve this, first color a cell in the middle of each of the two sides of length $M$. This ensures that the rectangle is so wide that these two sides are at maximal chessboard distance from each other, so any cell colored next to one of the sides can only increase the penalty of cells on the opposite side. By coloring cells alternating up and down on each side (similarly to the solution of the case $N=1$) we can ensure that every cell receives at most $2$ penalties. This only leaves the corners as corner cases (as their penalties can be increased from two sides), but they can also be handled by choosing the order of vertical and horizontal extensions correctly.
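The tie-broken distance comparison from the statement, written out explicitly (helper name is mine); this is the predicate any penalty checker for this problem would be built on:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Is (x2, y2) strictly further than (x3, y3) from (x1, y1)?
// Chessboard (Chebyshev) distance decides first; ties are broken by
// Manhattan distance, exactly as in the statement.
bool further(int x1, int y1, int x2, int y2, int x3, int y3) {
    int c2 = max(abs(x1 - x2), abs(y1 - y2));
    int c3 = max(abs(x1 - x3), abs(y1 - y3));
    if (c2 != c3) return c2 > c3;
    int m2 = abs(x1 - x2) + abs(y1 - y2);
    int m3 = abs(x1 - x3) + abs(y1 - y3);
    return m2 > m3;
}
```

For instance, from the origin a diagonal neighbour of the same Chebyshev ring is further than an axis-aligned one, which is exactly why the corners of each new ring need separate care.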
[ "constructive algorithms", "geometry", "greedy", "math" ]
2,400
#include <bits/stdc++.h> using namespace std; using ll = long long; void solve() { int n, m; cin >> n >> m; auto print = [&](int x, int y) { if (1 <= x && x <= n && 1 <= y && y <= m) { cout << x << " " << y << "\n"; } }; int cx = (n+1)/2; int cy = (m+1)/2; int layers = max(n, m)/2; print(cx, cy); for (int i = 1; i <= layers; i++) { for (int j = 0; j < 2*i-1; j++) { int sgn = (j & 1) ? 1 : -1; print(cx+sgn*(j+1)/2, cy+i); print(cx+sgn*(j+1)/2, cy-i); } for (int j = 0; j < 2*i-1; j++) { int sgn = (j & 1) ? 1 : -1; print(cx+i, cy+sgn*(j+1)/2); print(cx-i, cy+sgn*(j+1)/2); } print(cx-i, cy-i); print(cx-i, cy+i); print(cx+i, cy-i); print(cx+i, cy+i); } } int main() { ios::sync_with_stdio(0); cin.tie(0); int tc; cin >> tc; while (tc--) solve(); }
2118
F
Shifts and Swaps
You are given arrays $a$ and $b$ of length $n$ and an integer $m$. The arrays only contain integers from $1$ to $m$, and both arrays contain all integers from $1$ to $m$. You may repeatedly perform either of the following operations on $a$: - cyclic shift$^{\text{∗}}$ the array to the left - swap two neighboring elements if their difference is at least $2$. Is it possible to transform the first array into the second? \begin{footnotesize} $^{\text{∗}}$A left cyclic shift of a zero-indexed array $p$ of length $n$ is an array $q$ such that $q_i = p_{(i + 1) \bmod n}$ for all $0 \le i < n$. \end{footnotesize}
The order of two elements only matters if the absolute value of their difference is at most $1$. Try to store the positions of elements with value $v$ relative to elements with value $v+1$. (The next hint will be about what to do with these.) We want to somehow hash the relative positions. (The next hint will be about a thing you might not have known you can hash.) It is possible to hash rooted trees, see Vladosiya's blog post. Note: the structure you hopefully came up with using hint 2 will certainly not be a single rooted tree, use this algorithm more as a guide. (The next hint will be about how you can check if two strings are rotations of each other.) You can check if two strings are rotations of each other either by using KMP or hashing. We'll consider the arrays cyclic: $a_1$ and $a_n$ are also adjacent. For each array, let's build rooted trees where the children of each vertex are ordered. Each vertex corresponds to a value in an array. For each index $i$, let its children be the occurrences of the value $a_i - 1$, in order, between the $i$-th element and the next occurrence of the value $a_i$. The roots of the trees will be the nodes with value $m$ in an array. We hash each tree, then check whether the two sequences of hashes are rotations of each other (for example using KMP). For hashing a tree, see Vladosiya's blog post. This solution has a time complexity of $O(N\log{N})$ if you use a map for deterministic hashing.
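Once both arrays are reduced to sequences of tree hashes, the final step is a rotation check. For illustration, here is the classic "search in the doubled sequence" formulation; `std::search` is quadratic in the worst case, and the editorial's KMP (or hashing) would make this linear:

```cpp
#include <bits/stdc++.h>
using namespace std;

// a is a rotation of b iff they have equal length and a occurs as a
// contiguous subsequence of b concatenated with itself.
bool is_rotation(const vector<uint64_t>& a, const vector<uint64_t>& b) {
    if (a.size() != b.size()) return false;
    vector<uint64_t> bb(b);
    bb.insert(bb.end(), b.begin(), b.end());  // b + b
    return search(bb.begin(), bb.end(), a.begin(), a.end()) != bb.end();
}
```

Replacing `std::search` with a KMP matcher keeps the overall solution at $O(N\log N)$.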
[ "data structures", "graphs", "hashing", "trees" ]
3100
#include <bits/stdc++.h> using namespace std; using ll = long long; inline int sign_of_non_zero(const int x) { return x > 0 ? 1 : -1; } struct IllegalTransformationException : public std::runtime_error { using std::runtime_error::runtime_error; }; template < std::uint64_t ELEMENT_MULTIPLIER, std::uint64_t HASH_MULTIPLIER, std::uint64_t OFFSET > inline std::uint64_t circular_hash(const std::vector<std::uint64_t>& arr) { std::uint64_t current_hash = 0; for (const std::uint64_t& elem : arr) { current_hash *= ELEMENT_MULTIPLIER; current_hash += elem; } std::uint64_t first_multiplier = 1; for (int i = 0; i + 1 < arr.size(); i++) first_multiplier *= ELEMENT_MULTIPLIER; std::vector<std::uint64_t> hashes; for (const std::uint64_t& elem : arr) { hashes.push_back(current_hash); current_hash -= first_multiplier * elem; current_hash *= ELEMENT_MULTIPLIER; current_hash += elem; } sort(hashes.begin(), hashes.end()); std::uint64_t result = 0; std::uint64_t hash_multipler = 1; for (const std::uint64_t& hash : hashes) { result += hash * hash + hash * hash_multipler + OFFSET; hash_multipler *= HASH_MULTIPLIER; } return result; } // VEC must support indexing and have `.size()`. 
template <typename VEC = std::vector<int>> class braid_graph { VEC braid; int strand_count; std::vector<std::vector<int>> children; template < std::uint64_t CHILD_MULTIPLIER, std::uint64_t OFFSET, std::uint64_t NEGATIVE_MULTIPLIER, std::uint64_t NEGATIVE_OFFSET > std::uint64_t hash_of_vertex(std::vector<std::uint64_t>& hashes, const int& v) const { if (hashes[v] == 0) { std::uint64_t result = 0; std::uint64_t multiplier = 1; for (const int& child : children[v]) { const std::uint64_t base_hash = hash_of_vertex<CHILD_MULTIPLIER, OFFSET, NEGATIVE_MULTIPLIER, NEGATIVE_OFFSET>(hashes, child); result += base_hash * base_hash + base_hash * multiplier + OFFSET; multiplier *= CHILD_MULTIPLIER; } if (braid[v] < 0) result = result * result + result * NEGATIVE_MULTIPLIER + NEGATIVE_OFFSET; hashes[v] = result; } return hashes[v]; } template < std::uint64_t CHILD_MULTIPLIER, std::uint64_t OFFSET, std::uint64_t NEGATIVE_MULTIPLIER, std::uint64_t NEGATIVE_OFFSET, std::uint64_t CIRCULAR_HASH_ELEMENT_MULTIPLIER, std::uint64_t CIRCULAR_HASH_HASH_MULTIPLIER, std::uint64_t CIRCULAR_HASH_OFFSET > std::uint64_t hash_more_than_two_strands() const { // Implementation based on: https://codeforces.com/blog/entry/113465 // Since C++20 is not supported, instead of `optional<int>`, `0` will be used as semantic value for non-existance. std::vector<std::uint64_t> hashes(braid.size()); std::vector<std::uint64_t> top_hashes; // Not tested if faster. 
top_hashes.reserve(braid.size()); for (int i = 0; i < braid.size(); i++) { if (abs(braid[i]) != strand_count - 1) continue; top_hashes.push_back(hash_of_vertex<CHILD_MULTIPLIER, OFFSET, NEGATIVE_MULTIPLIER, NEGATIVE_OFFSET>(hashes, i)); } return circular_hash<CIRCULAR_HASH_ELEMENT_MULTIPLIER, CIRCULAR_HASH_HASH_MULTIPLIER, CIRCULAR_HASH_OFFSET>(top_hashes); } template < std::uint64_t MULTIPLIER, std::uint64_t POSITIVE, std::uint64_t NEGATIVE, std::uint64_t CIRCULAR_HASH_ELEMENT_MULTIPLIER, std::uint64_t CIRCULAR_HASH_HASH_MULTIPLIER, std::uint64_t CIRCULAR_HASH_OFFSET > std::uint64_t hash_two_strands() const { std::vector<std::uint64_t> hashes(braid.size()); for (int i = 0; i < braid.size(); i++) hashes[i] = braid[i] == 1 ? POSITIVE : NEGATIVE; const std::uint64_t result = circular_hash<CIRCULAR_HASH_ELEMENT_MULTIPLIER, CIRCULAR_HASH_HASH_MULTIPLIER, CIRCULAR_HASH_OFFSET>(hashes); return result; } public: braid_graph( const VEC braid, const int strand_count ) : braid(braid), strand_count(strand_count) { if (strand_count == 2) return; children.resize(braid.size()); // Since C++20 is not supported, instead of `optional<int>`, `-1` will be used as semantic value for non-existance. std::vector<std::vector<int>> last_occurence(strand_count); for (int i = 0; i < braid.size(); i++) { const int cur = abs(braid[i]); // Because sigmas start from 1, this is fine. last_occurence[cur - 1].clear(); last_occurence[cur].push_back(i); } for (int i = 0; i < braid.size(); i++) { const int cur = abs(braid[i]); // Because sigmas start from 1, this is fine. children[i] = last_occurence[cur - 1]; last_occurence[cur - 1].clear(); last_occurence[cur].push_back(i); } } // It is recommended that CHILD_MULTIPLIER be a prime and all template parameters are sufficiently different. template < std::uint64_t MULTIPLIER = 1'000'000'007, std::uint64_t OFFSET = 42, std::uint64_t NEGATIVE_MULTIPLIER = 3'141'592, std::uint64_t NEGATIVE_OFFSET = 2'622'057, // Only used for `strand_count == 2`. 
std::uint64_t POSITIVE = 2'718'281, std::uint64_t EMPTY_HASH = 1'618'033, std::uint64_t CIRCULAR_HASH_ELEMENT_MULTIPLIER = MULTIPLIER, std::uint64_t CIRCULAR_HASH_HASH_MULTIPLIER = 693'147, std::uint64_t CIRCULAR_HASH_OFFSET = 1'414'213 > std::uint64_t hash() const { if (strand_count == 1) return EMPTY_HASH; if (strand_count == 2) return hash_two_strands< MULTIPLIER, POSITIVE, NEGATIVE_OFFSET, CIRCULAR_HASH_ELEMENT_MULTIPLIER, CIRCULAR_HASH_HASH_MULTIPLIER, CIRCULAR_HASH_OFFSET >(); else return hash_more_than_two_strands< MULTIPLIER, OFFSET, NEGATIVE_MULTIPLIER, NEGATIVE_OFFSET, CIRCULAR_HASH_ELEMENT_MULTIPLIER, CIRCULAR_HASH_HASH_MULTIPLIER, CIRCULAR_HASH_OFFSET >(); } }; std::uint64_t sim_single_hash(const vector<ll> input, const int strand_count) { const braid_graph<vector<ll>> g(input, strand_count+1); return g.hash(); } void solve() { int n, m; cin >> n >> m; vector<ll> a(n), b(n); for (ll &x : a) cin >> x; for (ll &x : b) cin >> x; cout << (sim_single_hash(a, m) == sim_single_hash(b, m) ? "YES\n" : "NO\n"); } int main() { ios::sync_with_stdio(0); cin.tie(0); int t = 1; cin >> t; while (t--) solve(); return 0; }
2120
A
Square of Rectangles
Aryan is an ardent lover of squares but a hater of rectangles (Yes, he knows all squares are rectangles). But Harshith likes to mess with Aryan. Harshith gives Aryan three rectangles of sizes $l_1\times b_1$, $l_2\times b_2$, and $l_3\times b_3$ such that $l_3\leq l_2\leq l_1$ and $b_3\leq b_2\leq b_1$. Aryan, in order to defeat Harshith, decides to arrange these three rectangles to form a square such that no two rectangles overlap and the rectangles are aligned along edges. Rotating rectangles is \textbf{not} allowed. Help Aryan determine if he can defeat Harshith.
There are only two possible ways to arrange the rectangles into a square: All three rectangles are put side by side, i.e. $l_1=l_2=l_3=b_1+b_2+b_3$ or $b_1=b_2=b_3=l_1+l_2+l_3$. Rectangles $2$ and $3$ are side by side with rectangle $1$ above them, i.e. $l_1+l_2=l_1+l_3=b_1=b_2+b_3$ or $b_1+b_2=b_1+b_3=l_1=l_2+l_3$. Check both of these cases, and if either holds, output YES, otherwise NO. Complexity is $O(1)$ per test.
[ "geometry", "math" ]
null
#include <iostream> using namespace std; int main(){ int t; cin >> t; int l1, b1, l2, b2, l3, b3; auto check = [&] () { if (l1 == l2 && l2 == l3) return (l1 == b1 + b2 + b3 || (b1 == b2 + b3 && 2*l1 == b1)); if (l2 == l3) return (b2 + b3 == b1 && b1 == l2 + l1); return false; }; while(t--) { cin >> l1 >> b1 >> l2 >> b2 >> l3 >> b3; if(check()) cout << "YES\n"; else { swap(l1, b1); swap(l2, b2); swap(l3, b3); if(check()) cout << "YES\n"; else cout << "NO\n"; } } return 0; }
2120
B
Square Pool
Aryan and Harshith are playing pool in universe AX120 on a fixed square pool table of side $s$ with \textbf{pockets} at its $4$ corners. The corners are situated at $(0,0)$, $(0,s)$, $(s,0)$, and $(s,s)$. In this game variation, $n$ identical balls are placed on the table with integral coordinates such that no ball lies on the edge or corner of the table. Then, they are all simultaneously shot at $10^{100}$ units/sec speed (only at $45$ degrees with the axes). In universe AX120, balls and pockets are almost point-sized, and the collisions are elastic, i.e., the ball, on hitting any surface, bounces off at the same angle and with the same speed. Harshith shot the balls, and he provided Aryan with the balls' positions and the angles at which he shot them. Help Aryan determine the number of balls potted into the \textbf{pockets} by Harshith. It is guaranteed that multiple collisions do not occur at the same moment and position.
What happens, eventually, to the balls that collide with an edge? How does a collision between two balls affect the outcome? Observations: Any ball on the diagonals of the square that is shot towards a pocket will be potted on a free table. Any ball that collides with an edge will follow a $4$-periodic path, colliding with the $4$ edges forever on a free table. The collisions are elastic, so if two balls collide, they exchange their directions. $\bigstar$ Neither kind of collision affects the effective number of balls traversing towards a pocket. Therefore, the answer is the number of balls initially on the diagonals of the square and shot towards the pockets. Hope you liked the figures; a lot of effort went into them.
[ "geometry" ]
null
#include <iostream> using namespace std; int main() { int t; cin >> t; int n, s, ans = 0, dxi, dyi, xi, yi; while(t--) { cin >> n >> s; for (int i = 0; i < n; i++) { cin >> dxi >> dyi >> xi >> yi; if (dxi == dyi) ans += (xi == yi); else ans += (xi + yi == s); } cout << ans << '\n'; ans = 0; } return 0; }
2120
C
Divine Tree
Harshith attained enlightenment in Competitive Programming by training under a Divine Tree. A divine tree is a rooted tree$^{\text{∗}}$ with $n$ nodes, labelled from $1$ to $n$. The divineness of a node $v$, denoted $d(v)$, is defined as the smallest node label on the unique simple path from the root to node $v$. Aryan, being a hungry Competitive Programmer, asked Harshith to pass on the knowledge. Harshith agreed on the condition that Aryan would be given two positive integers $n$ and $m$, and he had to construct a divine tree with $n$ nodes such that the total divineness of the tree is $m$, i.e., $\displaystyle\sum\limits_{i=1}^n d(i)=m$. If no such tree exists, Aryan must report that it is impossible. Desperate for knowledge, Aryan turned to you for help in completing this task. As a good friend of his, help him solve the task. \begin{footnotesize} $^{\text{∗}}$A tree is a connected graph without cycles. A rooted tree is a tree where one vertex is special and called the root. \end{footnotesize}
What are the bounds on $m$ for a given $n$ to have a divine tree? And how does the tree look for the lower bound and the upper bound? The $\text{min}$ and $\text{max}$ values of $m$ for a divine tree to exist are $n$ and $\frac{n \cdot (n + 1)}{2}$ respectively. $\text{POC:}$ Any $m \in [\text{min}, \text{max}]$ can be achieved, similar to an exhaustive subset sum with redraws enabled. Let $p = m - n$, $\text{cur} = 0$, $\text{ans}$ = [ ]. Now, we need to select a multiset of $n$ non-negative integers that sum to $p$ (a label $i + 1$ contributes an extra $i$ beyond the mandatory $1$). $\text{Greedy:}$ Loop $i$ from $n - 1$ to $0$; if $\text{cur} + i \le p$: $\text{cur}$ += $i,$ $\text{ans.push_back}(i + 1)$. If $\text{ans.back}()$ is not $1$, add a $1$, why? $\text{Construction:}$ The tree can always be a simple path, why? The tree is rooted at $\text{ans}[0]$. Let's store $\text{vis}[0:n - 1] = \text{false}$ to keep track of whether a node is visited or not. Loop $i$ from $1$ to $\text{size(ans)}$, add an edge between $\text{ans}[i-1]$ and $\text{ans}[i]$, and mark all of them $\text{true}$ in $\text{vis}$. Take all unvisited nodes in the array $\text{unvis}$ and add an edge from $1$ to $\text{unvis}[0]$. Loop $i$ from $1$ to $\text{size(unvis)}$ and add an edge from $\text{unvis}[i-1]$ to $\text{unvis}[i]$. The divine tree, which is a simple path, looks like $\text{ans}[0] \leftrightarrow$ . . . $\leftrightarrow 1 \leftrightarrow \text{unvis}[0] \leftrightarrow$ . . . $\leftrightarrow \text{unvis.back}()$.
[ "constructive algorithms", "greedy", "math", "sortings", "trees" ]
null
#include <iostream> #include <cstdint> #include <cassert> #include <vector> using namespace std; #define i64 int64_t void solve() { i64 n, sum; cin >> n >> sum; if(sum < n || sum > n * (n + 1) / 2) { cout << "-1\n"; return; } i64 k = sum - n; vector<i64> ans; i64 curr = 0, nsum = 0; for(i64 i = n - 1; i >= 0; --i) { if(curr == k) break; if(curr + i <= k) { curr += i; ans.push_back(i + 1); nsum += i + 1; } } i64 ct = ans.size(); for(i64 i = 0; i < n - ct; ++i) ans.push_back(1); nsum += (n - ct); assert(nsum == sum); if(n == sum) { cout << "1\n"; for(i64 i = 1; i < n; i++) cout << i << ' ' << i + 1 << '\n'; return; } vector<bool> vis(n + 1, 1); cout << ans[0] << '\n'; vis[ans[0]] = 0; for(i64 i = 1; i <= n; i++) { cout << ans[i - 1] << ' ' << ans[i] << '\n'; vis[ans[i - 1]] = 0, vis[ans[i]] = 0; if(ans[i] == 1) { i64 prev = 1; for(i64 j = 2; j <= n; ++j) { if(vis[j]) { cout << prev << ' ' << j << '\n'; prev = j; } } return; } } } int main() { ios_base::sync_with_stdio(false); cin.tie(NULL); cout.tie(NULL); i64 t; cin >> t; while(t--) solve(); return 0; }
2120
D
Matrix game
Aryan and Harshith play a game. They both start with three integers $a$, $b$, and $k$. Aryan then gives Harshith two integers $n$ and $m$. Harshith then gives Aryan a matrix $X$ with $n$ rows and $m$ columns, such that each of the elements of $X$ is between $1$ and $k$(inclusive). After that, Aryan wins if he can find a submatrix$^{\text{∗}}$ $Y$ of $X$ with $a$ rows and $b$ columns such that all elements of $Y$ are equal. For example, when $a=2, b=2, k=6, n=3$ and $m=3$, if Harshith gives Aryan the matrix below, it is a win for Aryan as it has a submatrix of size $2\times 2$ with all elements equal to $1$ as shown below. \begin{center} Example of a matrix where Aryan wins \end{center} Aryan gives you the values of $a$, $b$, and $k$. He asks you to find the lexicographically minimum tuple $(n,m)$ that he should give to Harshith such that Aryan always wins. Help Aryan win the game. Assume that Harshith plays optimally. The values of $n$ and $m$ can be large, so output them modulo $10^9+7$. A tuple $(n_1, m_1)$ is said to be lexicographically smaller than $(n_2, m_2)$ if either $n_1<n_2$ or $n_1=n_2$ and $m_1<m_2$. \begin{footnotesize} $^{\text{∗}}$A submatrix of a matrix is obtained by removing some rows and/or columns from the original matrix. \end{footnotesize}
Use the pigeonhole principle to get the minimum $n$ and then the minimum $m$. If each column has $k(a-1)+1$ entries, i.e. $n=k(a-1)+1$, then by the pigeonhole principle every column has at least $a$ elements with the same value. Let those elements appear at positions $p_1<p_2<\ldots<p_a$ ($p_i$ is the row number where the element occurs) and let the value be $v$. Consider the tuple $(v, p_1,p_2,\ldots,p_a)$. If the same tuple appears for $b$ different columns, we are done, as we obtain an $a\times b$ submatrix with all elements of the same value. The number of possible values of the tuple is $k\binom{n}{a}$. So, by the pigeonhole principle, there should be at least $(b-1)\,k\binom{n}{a}+1$ columns for there to be $b$ repetitions.
[ "combinatorics", "math" ]
null
#include <bits/stdc++.h> using namespace std; const long long mod=1000000007; long long inv[100001]; int main(){ ios::sync_with_stdio(false),cin.tie(0); int T; long long i,a,b,k,d,ans; inv[1]=1; for(i=2;i<=100000;i++)inv[i]=(mod-mod/i)*inv[mod%i]%mod; for(cin>>T;T>0;T--) { cin>>a>>b>>k; d=k*a-k+1; ans=k; for(i=1;i<=a;i++)ans=ans*((d-i+1)%mod)%mod*inv[i]%mod; cout<<d%mod<<' '<<(ans*b-ans+1+mod)%mod<<'\n'; } return 0; }
2120
E
Lanes of Cars
Harshith is the president of TollClub. He tasks his subordinate Aryan to oversee a toll plaza with $n$ lanes. Initially, the $i$-th lane has $a_i$ cars waiting in a queue. Exactly one car from the front of each lane passes through the toll every second. The angriness of a car is defined as the number of seconds it had to wait before passing through the toll. Consider it takes 1 sec for each car to pass the toll, i.e., the first car in a lane has angriness $1$, the second car has angriness $2$, and so on. To reduce congestion and frustration, cars are allowed to switch lanes. A car can instantly move to the back of any other lane at any time. However, changing lanes increases its angriness by an additional $k$ units due to the confusion caused by the lane change. Harshith, being the awesome person he is, wants to help the drivers by minimising the total angriness of all cars. He asks Aryan to do so or get fired. Aryan is allowed to change lanes of any car anytime (possibly zero), but his goal is to find the minimum possible total angriness if the lane changes are done optimally. Help Aryan retain his job by determining the minimum angriness he can achieve.
Use binary search to find the minimum number of cars in a lane after an optimal number of lane shifts. Adjust cars afterwards such that the minimum number of cars remains the same as found in the binary search, but the angriness is minimized. Observe the following (let $1$ car shift lanes at a time): It is always optimal to shift the car at the back of a lane before shifting cars in front of it. The optimal move is that in every iteration, a car shifts from the back of the lane with the most cars to the back of the lane with the fewest cars. If $(\text{cars in the max lane} - \text{cars in the min lane}) \le k$, then it is not optimal to shift any cars, as it would only increase the angriness. It doesn't matter when a car switches lanes; it can switch at any time and the optimal answer remains the same. For a value $v$, let $def(v)$ be the number of cars required so that each lane has at least $v$ cars, i.e. $def(v)=\sum_{i=1}^N \max(0, v-a_i)$; let $defs(v)$ be the number of lanes with fewer than $v$ cars initially, i.e. $defs(v)=\sum_{i=1}^N (v-a_i>0)$; let $exc(v)$ be the number of cars to remove so that each lane has at most $v$ cars, i.e. $exc(v)=\sum_{i=1}^N \max(0, a_i-v)$; and let $excs(v)$ be the number of lanes with more than $v$ cars initially, i.e. $excs(v)=\sum_{i=1}^N (a_i-v>0)$. Using binary search, find the maximum value of $v$ for which $exc(v+k)>def(v)$. This $v$ is the minimum value that the array will have after cars have switched lanes optimally, and the maximum value of the array will be $v+k$ if $exc(v+k)=def(v)$ and $v+k+1$ if $exc(v+k)>def(v)$. In the final sorted array $A'$ after optimal lane switches, the values between $v+1$ and $v+k-1$ remain the same as in array $A$. Of the first $defs(v)$ elements, the last $\max(0, exc(v)-def(v)-excs(v))$ values will be $v+1$ and the remaining values will be $v$. Of the last $excs(v)$ elements, the last $\min(excs(v), exc(v)-def(v))$ values will be $v+k+1$ and the remaining elements will be $v+k$. 
Find the minimum angriness after optimal lane switches using array $A'$ and calculating the number of cars that have switched lanes. The time complexity is $O(n \log \max(A_i))$. Special thanks to picramide for the initial problem idea, which I misheard, and it turned into this problem.
[ "binary search", "dp", "ternary search" ]
null
#include<bits/stdc++.h> using namespace std; typedef long long LL; typedef double DB; const int N = 1111111; const LL inf = 1e18; int n,k,a[N]; LL s[N]; int main(){ int T,i,l,r,h; LL x,y,z,o,p,t; scanf("%d",&T); while(T--){ scanf("%d%d",&n,&k); for(i=1;i<=n;i++) scanf("%d",a+i); sort(a+1,a+n+1); for(i=1;i<=n;i++) s[i]=s[i-1]+a[i]; z=inf,o=-1; l=0,r=N; while(l<=r){ h=l+r>>1; i=lower_bound(a+1,a+n+1,h)-a-1; x=(LL)i*h-s[i]; i=lower_bound(a+1,a+n+1,h+k)-a-1; y=(s[n]-s[i])-(LL)(n-i)*(h+k); if(z>max(x,y)) z=max(x,y),o=h; if(x<y) l=h+1; else r=h-1; } //cout<<z<<' '<<o<<endl; p=z*k; t=0; for(i=1;i<=n;i++){ x=min(max((LL)a[i],o),o+k); p+=(LL)x*(x+1)/2; t+=x-a[i]; } if(t>0) p-=(o+k)*t; if(t<0) p+=(o+1)*-t; printf("%lld\n",p); } return 0; }
2120
F
Superb Graphs
As we all know, Aryan is a funny guy. He decides to create fun graphs. For a graph $G$, he defines fun graph $G'$ of $G$ as follows: - Every vertex $v'$ of $G'$ maps to a non-empty independent set$^{\text{∗}}$ or clique$^{\text{†}}$ in $G$. - The sets of vertices of $G$ that the vertices of $G'$ map to are pairwise disjoint and combined cover all the vertices of $G$, i.e., the sets of vertices of $G$ mapped by vertices of $G'$ form a partition of the vertex set of $G$. - If an edge connects two vertices $v_1'$ and $v_2'$ in $G'$, then there is an edge between every vertex of $G$ in the set mapped to $v_1'$ and every vertex of $G$ in the set mapped to $v_2'$. - If an edge does not connect two vertices $v_1'$ and $v_2'$ in $G'$, then there is not an edge between any vertex of $G$ in the set mapped to $v_1'$ and any vertex of $G$ in the set mapped to $v_2'$. As we all know again, Harshith is a superb guy. He decides to use fun graphs to create his own superb graphs. For a graph $G$, a fun graph $G' '$ is called a superb graph of $G$ if $G' '$ has the minimum number of vertices among all possible fun graphs of $G$. Aryan gives Harshith $k$ simple undirected graphs$^{\text{‡}}$ $G_1, G_2,\ldots,G_k$, all on the same vertex set $V$. Harshith then wonders if there exist $k$ other graphs $H_1, H_2,\ldots,H_k$, all on some other vertex set $V'$ such that: - $G_i$ is a superb graph of $H_i$ for all $i\in \{1,2,\ldots,k\}$. - If a vertex $v\in V$ maps to an independent set of size greater than $1$ in one $G_i, H_i$ ($1\leq i\leq k$) pair, then there exists no pair $G_j, H_j$ ($1\leq j\leq k, j\neq i$) where $v$ maps to a clique of size greater than $1$. Help Harshith solve his wonder. \begin{footnotesize} $^{\text{∗}}$For a graph $G$, a subset $S$ of vertices is called an independent set if no two vertices of $S$ are connected with an edge. 
$^{\text{†}}$For a graph $G$, a subset $S$ of vertices is called a clique if every vertex of $S$ is connected to every other vertex of $S$ with an edge. $^{\text{‡}}$A graph is a simple undirected graph if its edges are undirected and there are no self-loops or multiple edges between the same pair of vertices. \end{footnotesize}
If two vertices have the same open neighborhood in some graph $G_i$, at least one of them has to correspond to a clique in every $G_i, H_i$ pair. If two vertices have the same closed neighborhood in some graph $G_i$, at least one of them has to correspond to an independent set in every $G_i, H_i$ pair. In a graph $G$, two vertices $v_1$ and $v_2$ are said to be of type1 if they are not adjacent and have the same open neighborhood, i.e., $v_1v_2 \not\in E(G)$ and $N_G(v_1)=N_G(v_2)$. In a graph $G$, two vertices $v_1$ and $v_2$ are said to be of type2 if they are adjacent and have the same closed neighborhood, i.e., $v_1v_2 \in E(G)$ and $N_G[v_1]=N_G[v_2]$. An equivalence class is defined as the set of vertices of the same type. Here, note that if two vertices of type1 are present, then both vertices can't correspond to independent sets, as we can merge them into a larger independent set and denote it by a single vertex. Similarly, if two vertices of type2 are present, then both can't correspond to cliques, as we can merge them into a larger clique and denote it by a single vertex. Proof: Graph is superb $\rightarrow$ every type1 pair has at least one vertex corresponding to a clique and every type2 pair has at least one vertex corresponding to an independent set (we'll prove the contrapositive). Assume there exists a type1 pair $v_1v_2$ that has both vertices corresponding to an independent set. Let their vertex sets be $V_1$ and $V_2$. Consider the set $V=V_1 \cup V_2$ and a new vertex $v$ that has the same neighborhood as $v_1$ (or $v_2$). Each vertex in the set $V$ satisfies all of the $4$ conditions given, as if the vertices in set $V_1$ were connected to a vertex, so is every vertex in the set $V_2$, and vice versa. Hence, we can merge both $v_1$ and $v_2$ into a single vertex $v$ and still satisfy the conditions, making the graph not superb as it doesn't have minimum order. The same reasoning goes for a type2 pair. 
Every type1 pair has at least one vertex corresponding to a clique and every type2 pair has at least one vertex corresponding to an independent set $\rightarrow$ the graph is superb. Consider a graph $G({v_i}, E)$. Let its fun graph be $G'({V_i}, E')$. Observe that if two vertices $V_i$ and $V_j$ have different neighborhoods (excluding each other), then we can't have any fun graph $G2'$ in which a vertex $K$ contains some non-zero vertices of $V_i$ and some non-zero vertices of $V_j$. This is because if that happens, then it means all vertices in set $K$ have the same neighborhood, excluding each other, meaning that $V_i$ and $V_j$ have the same neighborhood, excluding each other, a contradiction. For the same reasons, if two vertices have the same neighborhood, excluding each other, they have to be either a type1 or type2 pair for there to exist another fun graph $G2'$ in which a vertex $K$ contains some non-zero vertices from both sets. Suppose every type1 pair has at least one vertex corresponding to a clique, and every type2 pair has at least one vertex corresponding to an independent set. In that case, there can't be any fun graph $G2'$ in which a vertex $K$ contains some non-zero vertices from two sets of $G'$, as set $K$ has either all vertices in it connected or none of them connected. So, in $G2'$, each vertex $K$ corresponds to a subset of vertices of $G'$, implying that it has at least as many vertices as $G'$. So, $G'$ is of minimum cardinality and is therefore superb. So, for this problem, a vertex is assigned $0$ if it is assigned an independent set and $1$ if it is assigned a clique. Consider two vertices $a$ and $b$. If both are type1, then we can denote it by $a \lor b$ (both can't be independent sets), and if both are type2, we can denote it by $\lnot a \lor \lnot b$ (both can't be cliques). We can build a $2$-SAT instance using these clauses. If the $2$-SAT instance has a solution, the answer is Yes; otherwise, it is No. 
Expected complexity is $O(n^3)$. When is a graph a superb graph of itself?
[ "2-sat", "graphs" ]
null
#include <bits/stdc++.h> using namespace std; #define endl '\n' #define int long long bool twoSAT(vector<vector<int>> &adj, vector<vector<int>> &adj_rev, vector<bool> &assignment) { int n = adj.size(); vector<int> order; vector<bool> used(n, false); function<void(int)> dfs1 = [&](int v) { used[v] = true; for(auto u : adj[v]) if(!used[u]) dfs1(u); order.push_back(v); }; vector<int> comp(n, -1); function<void(int, int)> dfs2 = [&](int v, int color) { comp[v] = color; for(auto u : adj_rev[v]) if(comp[u] == -1) dfs2(u, color); }; for(int i = 0; i < n; ++i) if(!used[i]) dfs1(i); used.assign(n, false); for(int i = 0, j = 0; i < n; ++i) { int v = order[n - i - 1]; if(comp[v] == -1) dfs2(v, j++); } assignment.assign(n / 2, false); for(int i = 0; i < n; i += 2) { if(comp[i] == comp[i + 1]) return false; assignment[i / 2] = comp[i] > comp[i + 1]; } return true; } void solve() { int n, k; cin >> n >> k; vector<vector<int>> adj(2*n), adj_rev(2*n); auto add_or = [&](int a, bool nega, int b, bool negb) { a = (2*a) + nega; b = (2*b) + negb; adj[a^1].push_back(b); adj[b^1].push_back(a); adj_rev[b].push_back(a^1); adj_rev[a].push_back(b^1); }; function<void(int, vector<vector<int>>&)> process = [&n, &add_or](int m, vector<vector<int>> &adj) { map<vector<int>, vector<int>> mp; for(int i = 0; i < n; ++i) { auto temp = adj[i]; temp.push_back(i); sort(temp.begin(), temp.end()); mp[temp].push_back(i); } for(auto [x, y] : mp) { for(int i = 0; i < y.size(); ++i) { for(int j = i + 1; j < y.size(); ++j) { add_or(y[i], 1, y[j], 1); } } } mp.clear(); for(int i = 0; i < n; ++i) { sort(adj[i].begin(), adj[i].end()); mp[adj[i]].push_back(i); } for(auto [x, y] : mp) { for(int i = 0; i < y.size(); ++i) { for(int j = i + 1; j < y.size(); ++j) { add_or(y[i], 0, y[j], 0); } } } }; for(int i = 0; i < k; ++i) { int m; cin >> m; vector<vector<int>> adj(n); for(int j = 0; j < m; ++j) { int x, y; cin >> x >> y; x--; y--; adj[x].push_back(y); adj[y].push_back(x); } process(m, adj); } vector<bool> 
assignment(n, false); cout << ((twoSAT(adj, adj_rev, assignment)) ? "Yes" : "No"); } int32_t main() { ios::sync_with_stdio(0); cin.tie(0); cout.tie(0); int _TC = 0; cin >> _TC; for(int _ct = 1; _ct <= _TC; ++_ct) { solve(); cout << endl; } }
2120
G
Eulerian Line Graph
Aryan loves graph theory more than anything. Well, no, he likes to flex his research paper on line graphs to everyone more. To start a conversation with you, he decides to give you a problem on line graphs. In the mathematical discipline of graph theory, the line graph of a simple undirected graph $G$ is another simple undirected graph $L(G)$ that represents the adjacency between every two edges in $G$. Precisely speaking, for an undirected graph $G$ without self-loops or multiple edges, its line graph $L(G)$ is a graph such that - Each vertex of $L(G)$ represents an edge of $G$. - Two vertices of $L(G)$ are adjacent if and only if their corresponding edges share a common endpoint in $G$. Also, $L^0(G)=G$ and $L^k(G)=L(L^{k-1}(G))$ for $k\geq 1$. An Euler trail is a sequence of edges that visits every edge of the graph exactly once. This trail can be either a path (starting and ending at different vertices) or a cycle (starting and ending at the same vertex). Vertices may be revisited during the trail, but each edge must be used exactly once. Aryan gives you a simple connected graph $G$ with $n$ vertices and $m$ edges and an integer $k$, and it is guaranteed that $G$ has an Euler trail and it is not a path graph$^{\text{∗}}$. He asks you to determine if $L^k(G)$ has an Euler trail. \begin{footnotesize} $^{\text{∗}}$A path graph is a tree where every vertex is connected to at most two other vertices. \end{footnotesize}
If $G$ has an Euler cycle, what can we say about $L^k(G)$, $k\geq1$? If $L(G)$ has an Euler path, how do we determine whether $L^k(G)$, $k\geq2$, has an Euler path? If $L(G)$ doesn't have an Euler tour, but $L^2(G)$ does, what can we say about the structure of $G$? Can there exist another graph $H$ such that $L(H)=G$? Following are the cases where an Euler tour is possible. For the case of an Euler cycle: If $G$ has an Euler cycle, then $L(G)$ always has an Euler cycle. If, after removing the two odd-degree vertices of $G$, the remaining graph is fully disconnected (i.e. the odd-degree vertices form a vertex cover), then $L^2(G)$ has an Euler cycle. For the case of an Euler path: If $G$ doesn't have an Euler tour but $L(G)$ does, then there is no graph $H$ such that $G=L^2(H)$. So, we only need to consider the cases where $G$ has an Euler tour, $L(G)$ doesn't, but $L^2(G)$ does, and where both $G$ and $L(G)$ have an Euler tour. If $G$ has an Euler path, $L(G)$ doesn't, but $L^2(G)$ does, then $G$ has exactly two odd-degree vertices $v_1$ and $v_2$. Additionally, the graph obtained after removing $v_1$ and $v_2$ from $G$ (i.e. $G\backslash${$v_1,v_2$}) consists of exactly one connected component with more than one vertex, and the rest are isolated vertices. Additionally, each connected component/isolated vertex in $G\backslash${$v_1,v_2$} has exactly two edges connecting it to $v_1$, $v_2$ in $G$. Here, $L^k(G)$, $k\geq3$, won't have an Euler tour. 
If $G$ has an Euler tour and $L(G)$ also has an Euler tour, find the smallest trailing path of $G$ (a trailing path is a path where the degree of all vertices is either $1$ or $2$; there will be only $2$ of them as $G$ has an Euler tour) and return its length; that determines the answer, since the length of a trailing path decreases by $1$ from $G$ to $L(G)$. All of this can be checked in the initial graph itself in $O(n+m)$ time. The removed problem was supposed to be G in the Div. 1 + Div. 2; the same problem exists and is available here. Let $s_i$ be the maximum number such that the array $[a_1, a_2, \ldots, a_{i-1}, s_i, a_{i+1}, \ldots, a_n]$ is valid. Lemma 1: There exists at most one index $i$ such that $a_i \ge s_i$. Proof 1: Assume there exist two indices $i$ and $j$ such that $a_i \ge s_i$ and $a_j \ge s_j$, say $i \lt j$. $a_j \ge s_j \ge j-1 + a_1 + a_2 + ... 
+ a_{j-1} \Rightarrow a_j \ge j-1 + \sum_{k = 1}^{j-1} a_k \Lleftarrow \text{Eq}_1$ For an index $i \lt j$, $a_i \ge s_i$ when $a_j \ge s_j \Rightarrow$ $a[i] \ge a[j] - (j-i) + 1 \Lleftarrow \text{Eq}_2$ $\text{Eq}_1$ and $\text{Eq}_2 \Rightarrow$ $a_j + a_i \ge j-1 - (j-i) + 1 + a_j + \sum_{k = 1}^{j-1} a_k \Rightarrow a_i \ge i + \sum_{k = 1}^{j-1} a_k$ since $i \lt j \Rightarrow$ $a_i \ge i + a_1 + a_2 + .... + a_i + ... + a_{j-1} \Rightarrow 0 \ge i + a_1 + ... + a_{i-1} + .... + a_j$ $\text{Contradiction!!}$ Lemma 2: If $a_i \le s_i$ $\forall$ $i \in [1, n]$, $a$ is valid. Proof 2: Backwards induction - let's divide it into two cases. Case 1. $a_i \lt s_i$ $\forall$ $i \in [1, n]$. Let $j \gt 0$ be the smallest index such that $a_j > 0$ and say $j$-th index car overtakes the one ahead $i.e.,$ $0 ... a_{j-1}, a_j, a_{j+1} ... a_n \Rightarrow 0 ... a_j-1, a_{j-1}, a_{j+1} ... a_n$ $0 ... s_{j-1}, s_j, s_{j+1} ... s_n \Rightarrow 0 ... s_j-1, \ge s_{j-1} + a_j, s_{j+1} ... s_n$ For all $i \lt j-1$ and $i \gt j$, $a_i \lt s_i$ holds as they were unaffected, why? At index $j-1$ since $a_j \lt s_j \Rightarrow a[j]-1 \lt s[j]-1$ holds. At index $j$ since $a_{j-1} < s_{j-1} \Rightarrow a_{j-1} < s_{j-1} + a_j$ holds. Case 2. Now, for an index $j$ if $a_j = s_j \Rightarrow$ we choose the index $k \gt j$ such that $a_k \gt 0$; if there's no such $k$, we choose $j$. Condition holds similarly, why? $\text{Solution}$ Let $p = [0, a_1, a_1 + a_2, a_1 + a_2 + a_3, .... ]$ and say an index $i$ is critical $\Leftrightarrow$ $i + p[i] < a_i$. Lemma 3: It is sufficient to check the greatest critical index $i.e.,$ if $i_1, i_2, ..., i_k$ are critical, $check(a, i_k)$ would be enough to determine whether $a$ is valid or not. Proof 3: Trivial, if $i_k$ performs $a_{i_k}$ overtakes successfully $\Rightarrow$ overtakes at indices $i < i_k$ will be nullified by $a_{i_k}$ Special thanks to Proelectro444 for the formal proof. 
$\looparrowright$ Alternate solution: an $O(n \log n)$ data-structure-optimized $check(a, i_k)$ $\forall$ $k \in [1, n]$. If $a$ is valid, also determine the index of the first overtaker; if multiple are possible, return any one of them.
[ "graphs", "greedy", "math" ]
null
#include "bits/stdc++.h" using namespace std; using ll = long long int; mt19937_64 RNG(chrono::high_resolution_clock::now().time_since_epoch().count()); #include <ext/pb_ds/assoc_container.hpp> #include <ext/pb_ds/tree_policy.hpp> using namespace __gnu_pbds; template<class T> using Tree = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>; struct FT { vector<ll> s; FT(int n) : s(n) {} void update(int pos, ll dif) { /* a[pos] += dif */ for (; pos < size(s); pos |= pos + 1) s[pos] += dif; } ll query(int pos) { /* sum of values in [0, pos) */ ll res = 0; for (; pos > 0; pos &= pos - 1) res += s[pos-1]; return res; } }; int main() { ios::sync_with_stdio(false); cin.tie(0); map<vector<int>, bool> cache; auto brute = [&] (const auto &self, auto v) -> bool { if (ranges::max(v) == 0) return true; if (cache.find(v) != cache.end()) return cache[v]; for (int i = 1; i < v.size(); ++i) { if (v[i] > 0) { auto w = v; --w[i]; swap(w[i], w[i-1]); bool res = self(self, w); if (res) return cache[v] = true; } } return cache[v] = false; }; auto solve = [&] (auto v) { int n = size(v); vector<int> b(n); Tree<array<int, 2>> cur; for (int i = 0; i < n; ++i) { if (v[i] == 0) b[i] = i; else { /* v[i]-th largest element */ if (cur.size() >= v[i]) { int want = cur.size() - v[i]; auto [val, _] = *cur.find_by_order(want); b[i] = val; } else b[i] = -1; } cur.insert({b[i], i}); } vector deactivate(n+1, vector<int>()); for (int i = 0; i < n; ++i) if (b[i] >= 0) deactivate[b[i]].push_back(i); ll pref = 0; for (int i = 0; i < n; ++i) pref += v[i] + 1; FT fen(n); ll suf = 0; for (int i = n-1; i >= 0; --i) { pref -= v[i] + 1; ranges::reverse(deactivate[i]); for (int x : deactivate[i]) { if (v[x]) { int sub = fen.query(x+1) - fen.query(i+1); sub = x - i - sub; suf -= v[x] - sub; fen.update(x, -1); } suf -= fen.query(n) - fen.query(x); } ll have = pref + suf; if (v[i] > have) return false; if (v[i] > 0) { suf += v[i]; fen.update(i, 1); } } return true; }; int t; cin >> t; while (t--) { 
int n; cin >> n; vector a(n, 0); for (int &x : a) cin >> x; if (solve(a)) cout << "Yes\n"; else cout << "No\n"; } }
2121
A
Letter Home
You are given an array of distinct integers $x_1, x_2, \ldots, x_n$ and an integer $s$. Initially, you are at position $pos = s$ on the $X$ axis. In one step, you can perform exactly one of the following two actions: - Move from position $pos$ to position $pos + 1$. - Move from position $pos$ to position $pos - 1$. A sequence of steps will be considered successful if, during the entire journey, you visit each position $x_i$ on the $X$ axis at least once. Note that the initial position $pos = s$ is also considered visited. Your task is to determine the minimum number of steps in any successful sequence of steps.
Notice that if we visit positions $x_1$ and $x_n$, we will necessarily visit all other positions $x_i$. Therefore, our goal will be to visit positions $x_1$ and $x_n$. We then have two options: From position $s$, go to position $x_1$, and from there go to position $x_n$. We will need to make $|s - x_1| + |x_n - x_1|$ steps. From position $s$, go to position $x_n$, and from there go to position $x_1$. We will need to make $|s - x_n| + |x_n - x_1|$ steps. From the two cases, we choose the smaller one. The answer will be $\min(|s - x_1|, |s - x_n|) + x_n - x_1$. The solution can be implemented in $O(n)$.
[ "brute force", "math" ]
null
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n, s; cin >> n >> s; vector<int> x(n); for (int i = 0; i < n; i++) cin >> x[i]; int ans = min(abs(s - x[0]), abs(s - x.back())) + x.back() - x[0]; cout << ans << '\n'; } return 0; }
2121
B
Above the Clouds
You are given a string $s$ of length $n$, consisting of lowercase letters of the Latin alphabet. Determine whether there exist three \textbf{non-empty} strings $a$, $b$, and $c$ such that: - $a + b + c = s$, meaning the concatenation$^{\text{∗}}$ of strings $a$, $b$, and $c$ equals $s$. - The string $b$ is a substring$^{\text{†}}$ of the string $a + c$, which is the concatenation of strings $a$ and $c$. \begin{footnotesize} $^{\text{∗}}$Concatenation of strings $a$ and $b$ is defined as the string $a + b = a_1a_2 \ldots a_pb_1b_2 \ldots b_q$, where $p$ and $q$ are the lengths of strings $a$ and $b$, respectively. For example, the concatenation of the strings "code" and "forces" is "codeforces". $^{\text{†}}$A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end. \end{footnotesize}
Notice that if there exist three non-empty strings $a$, $b$, and $c$ that satisfy the condition, then there exist three non-empty strings $a'$, $b'$, and $c'$ that satisfy the condition, where the length of string $b'$ is equal to $1$. To achieve this, one can choose any character from string $b$, append all characters before it to the end of string $a$, and prepend all characters after it to the beginning of string $c$. Let $cnt_l$ be the number of times character $l$ appears in string $s$. Then the answer "Yes" will be given if any of the following conditions hold: There exists a character $l$ such that $cnt_l \geq 3$. We can choose the second occurrence of character $l$ in string $s$ as string $b$. There exists a character $l$ such that $cnt_l = 2$ and either the first or the last character of string $s$ is not equal to $l$. We can choose any occurrence of character $l$ in the string as string $b$, except when it is the first or last character of string $s$. The solution can be implemented in $O(n)$.
[ "constructive algorithms", "greedy", "strings" ]
null
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; string s; cin >> s; vector<int> cnt(26, 0); for (auto c : s) cnt[c - 'a']++; int flag = 0; for (int i = 0; i < 26; i++) { if (cnt[i] >= 3) flag = 1; else if (cnt[i] == 2 && (s[0] - 'a' != i || s.back() - 'a' != i)) flag = 1; } if (flag) cout << "Yes" << '\n'; else cout << "No" << '\n'; } return 0; }
2121
C
Those Who Are With Us
You are given a matrix of integers with $n$ rows and $m$ columns. The cell at the intersection of the $i$-th row and the $j$-th column contains the number $a_{ij}$. You can perform the following operation \textbf{exactly once}: - Choose two numbers $1 \leq r \leq n$ and $1 \leq c \leq m$. - For all cells $(i, j)$ in the matrix such that $i = r$ or $j = c$, decrease $a_{ij}$ by one. You need to find the minimal possible maximum value in the matrix $a$ after performing exactly one such operation.
Let $mx$ be the maximum value in the matrix. Note that the answer to the problem will be either $mx - 1$ or $mx$. When will the answer be $mx - 1$? If there exists a pair $(r, c)$ such that all values of $mx$ are contained in row $r$ or column $c$. Let $row_r$ be the number of times $mx$ appears in row $r$; $col_c$ be the number of times $mx$ appears in column $c$. Then in row $r$ or column $c$, the number $mx$ appears $row_r + col_c$ times. However, there is an edge case: if $a_{rc} = mx$, we have counted it twice, so we need to subtract one. If this count equals the total occurrences of $mx$ in the matrix, then we can achieve the answer $mx - 1$. The solution can be implemented in $O(n \cdot m)$.
[ "greedy", "implementation" ]
null
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n, m; cin >> n >> m; vector<vector<int>> a(n, vector<int> (m)); int mx = 0, cnt_mx = 0; for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { cin >> a[i][j]; if (a[i][j] > mx) { mx = a[i][j], cnt_mx = 1; } else if (a[i][j] == mx) { cnt_mx++; } } } vector<int> r(n), c(m); for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { if (a[i][j] == mx) { r[i]++; c[j]++; } } } int flag = 0; for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) { if (r[i] + c[j] - (a[i][j] == mx) == cnt_mx) { flag = 1; } } } cout << mx - flag << '\n'; } return 0; }
2121
D
1709
You are given two arrays of integers $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$. It is guaranteed that each integer from $1$ to $2 \cdot n$ appears in exactly one of the arrays. You need to perform a certain number of operations (possibly zero) so that \textbf{both} of the following conditions are satisfied: - For each $1 \leq i < n$, it holds that $a_i < a_{i + 1}$ and $b_i < b_{i + 1}$. - For each $1 \leq i \leq n$, it holds that $a_i < b_i$. During each operation, you can perform exactly one of the following three actions: - Choose an index $1 \leq i < n$ and swap the values $a_i$ and $a_{i + 1}$. - Choose an index $1 \leq i < n$ and swap the values $b_i$ and $b_{i + 1}$. - Choose an index $1 \leq i \leq n$ and swap the values $a_i$ and $b_i$. You do not need to minimize the number of operations, but the total number must not exceed $1709$. Find any sequence of operations that satisfies \textbf{both} conditions.
Using bubble sort, we will sort array $a$: while there exists an index $1 \leq i < n$ such that $a_i > a_{i + 1}$, we will swap them. Similarly, we will sort array $b$. In this part of the task, we will perform no more than $\frac{n \cdot (n - 1)}{2} + \frac{n \cdot (n - 1)}{2} = n \cdot (n - 1)$ operations, since the number of swaps in bubble sort is equal to the number of inversions in the array. Next, for all $1 \leq i \leq n$ such that $a_i > b_i$, we will swap $a_i$ and $b_i$. After this, we will satisfy both conditions and perform a total of no more than $n \cdot (n - 1) + n = n^2$ operations. The solution can be implemented in $O(n^2)$.
[ "implementation", "sortings" ]
null
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> a(n), b(n); for (int i = 0; i < n; i++) cin >> a[i]; for (int i = 0; i < n; i++) cin >> b[i]; vector<pair<int, int>> ans; for (int i = 0; i < n; i++) { for (int j = 1; j < n; j++) { if (a[j - 1] > a[j]) { swap(a[j - 1], a[j]); ans.push_back({1, j}); } } } for (int i = 0; i < n; i++) { for (int j = 1; j < n; j++) { if (b[j - 1] > b[j]) { swap(b[j - 1], b[j]); ans.push_back({2, j}); } } } for (int i = 0; i < n; i++) { if (a[i] > b[i]) { ans.push_back({3, i + 1}); } } cout << ans.size() << '\n'; for (auto [x, y] : ans) cout << x << " " << y << '\n'; } return 0; }
2121
E
Sponsor of Your Problems
For two integers $a$ and $b$, we define $f(a, b)$ as the number of positions in the decimal representation of the numbers $a$ and $b$ where their digits are the same. For example, $f(12, 21) = 0$, $f(31, 37) = 1$, $f(19891, 18981) = 2$, $f(54321, 24361) = 3$. You are given two integers $l$ and $r$ of the \textbf{same} length in decimal representation. Consider all integers $l \leq x \leq r$. Your task is to find the minimum value of $f(l, x) + f(x, r)$.
Note that the number $x$ will have the same length in decimal representation as the numbers $l$ and $r$. Consider the example $l = 12345$ and $r = 12534$. For the number $x$ to be in the range between $l$ and $r$ inclusive, it must be of length $5$ and start with the digits $12$. In other words, the greatest common prefix of the numbers $l$ and $r$ will be the beginning of the number $x$. Next, look at the digits following the greatest common prefix, which in this case are $3$ and $5$. If these digits differ by at least two, we can choose a digit between them, excluding the boundaries, and then fill in the remaining lower digits such that there are no common digits with the numbers $l$ and $r$. In this case, for example, we can take the number $x = 12400$: we chose the digit $4$ between $3$ and $5$, and then we can choose the lower two digits arbitrarily; let's choose them so that there are no common digits with the numbers $l$ and $r$. In summary, if the digits after the greatest common prefix differ by at least two, the answer will be equal to twice the length of the greatest common prefix. Now consider the example $l = 1239990$ and $r = 1240037$. Again, we can conclude that the number $x$ will be of length $7$ and start with the digits $12$. But in this case, the digits after the greatest common prefix differ by $1$, which means we need to choose one of them. Let's consider two options for selection: The number $x$ will be of length $7$ and start with the digits $123$. Then this number will definitely be $< r$. We just need to ensure that this number is not less than $l$ and minimize the number of common digits. Notice that the next digit in $l$ is $9$, so we can guarantee that the number $x$ must start with the digits $123999$. The number $x$ will be of length $7$ and start with the digits $124$. Then this number will definitely be $> l$. We just need to ensure that this number is not greater than $r$ and minimize the number of common digits. 
Notice that the next digit in $r$ is $0$, so we can guarantee that the number $x$ must start with the digits $12400$. If we skip the complete breakdown of cases, the answer will consist of three parts that need to be summed: Twice the length of the greatest common prefix. One (since we are choosing one of the digits after the greatest common prefix, and it will be in one of the numbers). The number of consecutive digits $i$ after the first differing digit in the numbers $l$ and $r$, such that $l_i = 9$ and $r_i = 0$. Thus, in this case, the answer is $4 + 1 + 2 = 7$. We can choose the number $x = 1240001$, for example.
[ "dp", "greedy", "implementation", "strings" ]
null
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { string l, r; cin >> l >> r; if (l == r) { cout << 2 * l.size() << '\n'; continue; } int ptr = 0; while (ptr < l.size() && l[ptr] == r[ptr]) ptr++; if (l[ptr] + 1 < r[ptr]) { cout << 2 * ptr << '\n'; } else { int res = 2 * ptr + 1; for (int i = ptr + 1; i < l.size(); i++) { if (l[i] == '9' && r[i] == '0') res++; else break; } cout << res << '\n'; } } return 0; }
2121
F
Yamakasi
You are given an array of integers $a_1, a_2, \ldots, a_n$ and two integers $s$ and $x$. Count the number of subsegments of the array whose sum of elements equals $s$ and whose maximum value equals $x$. More formally, count the number of pairs $1 \leq l \leq r \leq n$ such that: - $a_l + a_{l + 1} + \ldots + a_r = s$. - $\max(a_l, a_{l + 1}, \ldots, a_r) = x$.
To start, let's recall how to solve the following problem: how many subsegments of the array have a sum of elements equal to $s$. Let $pref_i$ be the sum of the first $i$ elements of the array. Then the sum of a subsegment will be equal to $pref_r - pref_{l - 1}$. We will iterate over the right boundary $r$ and count the number of suitable left boundaries $l$, that is, such boundaries that $pref_r - pref_{l - 1} = s$ or $pref_{l - 1} = pref_r - s$. For this, we will additionally maintain a map to keep track of the number of left boundaries with a given prefix sum value for the considered prefix. Now we must also take into account the fact that the maximum value must equal $x$. For this, we will maintain a pointer $lef$ - the smallest left boundary that we have not yet added to the map, but can add when the maximum in the subsegment $(lef, r)$ equals $x$. Next, we have the following options: If $a_r > x$, then we need to clear the map and set $lef = r + 1$, since no good subsegment can contain the element $a_r$. If $a_r = x$, we need to add information about the left boundaries $lef, lef + 1, \ldots, r$ to the map (add information about their values $pref_{l - 1}$). After this, set $lef = r + 1$. If $a_r < x$, nothing changes. And similarly, for each right boundary, we can count the number of suitable left boundaries. The solution can be implemented in $O(n \log n)$.
[ "binary search", "brute force", "data structures", "greedy", "two pointers" ]
null
#include<bits/stdc++.h> using namespace std; typedef long long ll; int const maxn = 2e5 + 5; int a[maxn]; ll pref[maxn], s; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n, x; cin >> n >> s >> x; for (int i = 1; i <= n; i++) { cin >> a[i]; pref[i] = pref[i - 1] + a[i]; } ll ans = 0; map<ll, int> cnt; int lef = 1; for (int r = 1; r <= n; r++) { if (a[r] > x) cnt.clear(), lef = r + 1; else if (a[r] == x) { while (lef <= r) { cnt[pref[lef - 1]]++; lef++; } } ans += cnt[pref[r] - s]; } cout << ans << '\n'; } return 0; }
2121
G
Gangsta
You are given a binary string $s_1s_2 \ldots s_n$ of length $n$. A string $s$ is called binary if it consists only of zeros and ones. For a string $p$, we define the function $f(p)$ as the maximum number of occurrences of any character in the string $p$. For example, $f(00110) = 3$, $f(01) = 1$. You need to find the sum $f(s_ls_{l+1} \ldots s_r)$ for all pairs $1 \leq l \leq r \leq n$.
Let $c_0$ be the number of zeros in the subsegment, and $c_1$ be the number of ones. Notice that $\max(c_0, c_1) = \frac{c_0 + c_1 + |c_0 - c_1|}{2}$. Then we will find the sum $c_0 + c_1 + |c_0 - c_1|$ over all subsegments, and this sum divided by two will be the answer. Let $pref_i$ be the difference between the number of ones and the number of zeros in the prefix of length $i$. Then $f(s_ls_{l+1} \ldots s_r) = r - l + 1 + |pref_r - pref_{l - 1}|$. We will separately calculate the sum $r - l + 1$ over all subsegments. This can be done in $O(n)$: the number of segments of length $len$ will be $n - len + 1$, so we need to find $\sum\limits_{len = 1}^n len \cdot (n - len + 1)$. We still need to find the sum $|pref_r - pref_{l - 1}|$ over all subsegments. To do this, we will sort the array $pref$ (here we need to be careful, as this array consists of $n + 1$ elements, not $n$). After sorting, we will eliminate the absolute value, and we will need to find the sum $pref_r - pref_{l - 1}$ over all subsegments. This sum will be equal to $\sum\limits_{i = 0}^n pref_i \cdot (i - (n - i))$, since in $i$ terms we will take $pref_i$ with a positive sign, and in $n - i$ terms we will take it with a negative sign. The solution can be implemented in $O(n \log n)$.
[ "data structures", "divide and conquer", "math", "sortings" ]
null
#include<bits/stdc++.h> using namespace std; typedef long long ll; int const maxn = 2e5 + 5; int pref[maxn]; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; string s; cin >> s; ll ans = 0; for (int i = 0; i < n; i++) { pref[i + 1] = pref[i]; if (s[i] == '0') pref[i + 1]--; else pref[i + 1]++; } for (int i = 1; i <= n; i++) { ans += (ll)i * (n - i + 1); } sort(pref, pref + n + 1); for (int i = 0; i <= n; i++) { ans += (ll)pref[i] * (i - (n - i)); } cout << ans / 2 << '\n'; } return 0; }
2121
H
Ice Baby
The longest non-decreasing subsequence of an array of integers $a_1, a_2, \ldots, a_n$ is the longest sequence of indices $1 \leq i_1 < i_2 < \ldots < i_k \leq n$ such that $a_{i_1} \leq a_{i_2} \leq \ldots \leq a_{i_k}$. The length of the sequence is defined as the number of elements in the sequence. For example, the length of the longest non-decreasing subsequence of the array $a = [3, 1, 4, 1, 2]$ is $3$. You are given two arrays of integers $l_1, l_2, \ldots, l_n$ and $r_1, r_2, \ldots, r_n$. For each $1 \le k \le n$, solve the following problem: - Consider all arrays of integers $a$ of length $k$, such that for each $1 \leq i \leq k$, it holds that $l_i \leq a_i \leq r_i$. Find the maximum length of the longest non-decreasing subsequence among all such arrays.
Let $dp[i][j]$ be the minimum number $x$ such that we can choose the values of the elements $a_1, a_2, \ldots, a_i$ in such a way that there exists a non-decreasing subsequence of length $j$ with the last element of the subsequence equal to $x$. If it is impossible to choose a subsequence of length $j$, we will consider $dp[i][j] = \infty$. We also note that $dp[i][j] \leq dp[i][j + 1]$. Consider the recalculation of the dynamics when transitioning from $i$ to $i + 1$: If $dp[i][j] \leq l_{i + 1}$, then $dp[i + 1][j] = dp[i][j]$. If $dp[i][j - 1] \leq r_{i + 1}$, then $dp[i + 1][j] = \max(l_{i + 1}, dp[i][j - 1])$. It is worth noting that the number $\max(l_{i + 1}, dp[i][j - 1])$ will definitely not be greater than $dp[i][j]$ (if we have indeed entered this case and not the first one!). If $dp[i][j - 1] > r_{i + 1}$, then $dp[i + 1][j] = dp[i][j]$. The first and third transitions are not of interest to us, as they preserve the previous value of the dynamics for a fixed $j$. We only need to process the second transition. Here we will use the fact that $dp[i][j] \leq dp[i][j + 1]$. Let $lef$ be the smallest index such that $dp[i][lef] \geq l_{i + 1}$; $righ$ be the largest index such that $dp[i][righ] \leq r_{i + 1}$. The assertion is: then $dp[i + 1][lef] = l_{i + 1}$, and $dp[i + 1][j] = dp[i][j - 1]$ for $lef < j \leq righ + 1$. Note that $dp_i$ is a multiset of values, and since $dp[i][j] \leq dp[i][j + 1]$, the positions do not matter; only the values are important (they will uniquely determine the $i$-th layer of dynamics). Let's see how the multiset of values $dp_i$ differs from $dp_{i + 1}$: An element $l_{i + 1}$ is added. At most one element is removed, namely the smallest element that is greater than $r_{i + 1}$ (if such an element exists). Therefore, it is sufficient to maintain only the multiset of values of the $i$-th layer of dynamics (initially it is empty). 
We will handle two changes: insertion into the multiset (insert) and removal (erase) of the smallest element that is greater than a given number (such an element can be found using the upper_bound method). The solution can be implemented in $O(n \log n)$.
[ "binary search", "brute force", "data structures", "dp", "implementation", "sortings" ]
null
#include<bits/stdc++.h> using namespace std; int main() { ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; multiset<int> dp; for (int i = 1; i <= n; i++) { int l, r; cin >> l >> r; auto it = dp.upper_bound(r); if (it != dp.end()) { dp.erase(it); } dp.insert(l); cout << dp.size() << " "; } cout << '\n'; } return 0; }