| contest_id | index | title | statement | tutorial | tags | rating | code |
|---|---|---|---|---|---|---|---|
2093
|
E
|
Min Max MEX
|
You are given an array $a$ of length $n$ and a number $k$.
A subarray is defined as a sequence of one or more consecutive elements of the array. You need to split the array $a$ into $k$ non-overlapping subarrays $b_1, b_2, \dots, b_k$ such that the union of these subarrays equals the entire array. Additionally, you need to maximize the value of $x$, which is equal to the minimum MEX$(b_i)$, for $i \in [1..k]$.
MEX$(v)$ denotes the smallest non-negative integer that is not present in the array $v$.
|
To solve the problem, we use binary search on the answer. For a given $x$, we need to check whether there exists a partition in which every segment has MEX at least $x$. We build the segments greedily, one by one: first we find the minimal valid first segment, then the second, and so on. Since the segments must be non-overlapping and together cover the entire array, any correct partition contains a segment that starts at the first element, and the MEX of this segment must be at least $x$, so we extend the segment until its MEX becomes greater than or equal to $x$. Once this happens, there is no point in extending it further, so we move on to the next element and begin selecting the next segment. To maintain the MEX of the current segment, we use a counting array and a variable storing the current MEX: when a new number is added to the segment, we run a while loop that increments the MEX while the value equal to the current MEX is present in the segment. This is amortized linear in the number of elements of the segment, since the MEX of a segment cannot exceed its length. The overall complexity is $O(n \log n)$.
|
[
"binary search",
"brute force",
"greedy"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
vector<int> nums(2e5 + 5, 0); // presence marks for values of the current segment (reused across calls)
// returns true if v can be split into at least k consecutive segments, each with MEX >= m
bool check(vector<int>& v, int k, int m)
{
int cnt = 0;
int cur_mex = 0;
for (int i = 0; i < v.size(); i++) {
if (v[i] <= v.size() + 1) {
nums[v[i]] = 1;
}
while (nums[cur_mex]) {
cur_mex++;
}
if (cur_mex >= m) {
cnt++;
for (int j = 0; j < min(m + 1, (int)v.size() + 2); j++) {
nums[j] = 0;
}
cur_mex = 0;
}
}
for (int j = 0; j < v.size() + 2; j++) {
nums[j] = 0;
}
return cnt >= k;
}
void solve()
{
int n, k;
cin >> n >> k;
vector<int> v(n);
for (int i = 0; i < n; i++) {
cin >> v[i];
}
int l = 0;
int r = 1e9;
while (r - l > 1) {
int m = (r + l) / 2;
if (check(v, k, m)) {
l = m;
} else {
r = m;
}
}
cout << l << '\n';
}
signed main()
{
int t;
cin >> t;
for (int i = 0; i < t; i++) {
solve();
}
return 0;
}
|
2093
|
F
|
Hackers and Neural Networks
|
Hackers are once again trying to create entertaining phrases using the output of neural networks. This time, they want to obtain an array of strings $a$ of length $n$.
Initially, they have an array $c$ of length $n$, filled with blanks, which are denoted by the symbol $*$. Thus, if $n=4$, then initially $c=[*,*,*,*]$.
The hackers have access to $m$ neural networks, each of which has its own version of the answer to their request – an array of strings $b_i$ of length $n$.
The hackers are trying to obtain the array $a$ from the array $c$ using the following operations:
- Choose a neural network $i$, which will perform the next operation on the array $c$: it will select a \textbf{random} \textbf{blank}, for example, at position $j$, and replace $c_j$ with $b_{i, j}$. For example, if the first neural network is chosen and $c = [*, \text{«like»}, *]$, and $b_1 = [\text{«I»}, \text{«love»}, \text{«apples»}]$, then after the operation with the first neural network, $c$ may become either $[\text{«I»}, \text{«like»}, *]$ or $[*, \text{«like»}, \text{«apples»}]$.
- Choose position $j$ and replace $c_j$ with a blank.
Unfortunately, because of the way hackers access neural networks, they will only be able to see the modified array $c$ after all operations are completed, so they will have to specify the entire sequence of operations in advance.
However, the random behavior of the neural networks may lead to the situation where the desired array is never obtained, or obtaining it requires an excessive number of operations.
Therefore, the hackers are counting on your help in choosing a sequence of operations that will guarantee the acquisition of array $a$ in the minimum number of operations.
More formally, if there exists a sequence of operations that can \textbf{guarantee} obtaining array $a$ from array $c$, then among all such sequences, find the one with the \textbf{minimum} number of operations, and output the number of operations in it.
If there is no sequence of operations that transforms array $c$ into array $a$, then output $-1$.
|
Let's call an operation of the first type a miss if after it $c_j \neq a_j$. Let $x = \max_{i = 1 \dots m} \sum_{j=1}^{n} [b_{i,j} = a_j]$, i.e. the largest number of positions on which a single neural network agrees with $a$. Notice that in any case we will have at least $n - x$ misses. Why? Since the positions are chosen randomly, and we need to guarantee obtaining the array $a$, we must assume the worst case: if it is possible to choose a position $j$ such that $a_j \neq b_{i, j}$, then this will happen. Accordingly, if a neural network has $y$ positions that match the array $a$, it will first fill the $n - y$ non-matching positions. Therefore, to minimize the number of misses, we should select the neural network with the maximum number of positions matching the array $a$, that is, the neural network on which $x$ from the definition above is achieved. Consequently, whatever we do, we end up with $n - x$ wrong words that must be removed, which takes at least $n - x$ operations of the second type, and the blanks created this way must be filled again, which takes at least $n - x$ more operations of the first type. Together with the $n$ operations of the first type needed to fill the array initially, the theoretical minimum number of operations is $n + 2 \cdot (n - x)$. We now present an algorithm that achieves this bound and is therefore optimal. Take the neural network on which $x$ is achieved; after $n$ operations of the first type we have $x$ matching words and $n - x$ non-matching ones. Each wrong word can then be fixed using the second operation: suppose the word at position $i$ is wrong; we place a blank there and query a neural network for which this word is correct, and since there is only one blank, that network is guaranteed to place exactly the word we need. Doing this for every wrong word costs $n - x$ operations of the second type and another $n - x$ operations of the first type, which exactly matches the lower bound. Finally, if some position $j$ is matched by no neural network at all, it can never be filled with $a_j$, so the answer is $-1$.
|
[
"bitmasks",
"brute force",
"greedy"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n, m;
cin >> n >> m;
vector<string> a(n);
for (auto &i: a) {
cin >> i;
}
vector<vector<string>> b(m, vector<string>(n));
for (auto &ar: b) {
for (auto &s: ar) {
cin >> s;
}
}
vector<bool> exists(n);
int x = 0;
for (int i = 0; i < m; i++) {
int cnt = 0;
for (int j = 0; j < n; j++) {
if (a[j] == b[i][j]) {
cnt++;
exists[j] = true;
}
}
x = max(x, cnt);
}
if (all_of(exists.begin(), exists.end(), identity())) {
cout << n + 2 * (n - x) << "\n";
} else {
cout << "-1\n";
}
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
|
2093
|
G
|
Shorten the Array
|
The beauty of an array $b$ of length $m$ is defined as $\max(b_i \oplus b_j)$ among all possible pairs $1 \le i \le j \le m$, where $x \oplus y$ is the bitwise XOR of numbers $x$ and $y$. We denote the beauty value of the array $b$ as $f(b)$.
An array $b$ is called beautiful if $f(b) \ge k$.
Recently, Kostya bought an array $a$ of length $n$ from the store. He considers this array too long, so he plans to cut out some beautiful subarray from it. That is, he wants to choose numbers $l$ and $r$ ($1 \le l \le r \le n$) such that the array $a_{l \dots r}$ is beautiful. The length of such a subarray will be the number $r - l + 1$. The entire array $a$ is also considered a subarray (with $l = 1$ and $r = n$).
Your task is to find the length of the shortest beautiful subarray in the array $a$. If no subarray is beautiful, you should output the number $-1$.
|
First, note that in the optimal segment $[l, r]$, the maximum value of $a_i \oplus a_j$ must be achieved precisely when $i = l$ and $j = r$. Otherwise, we can shift at least one of the boundaries, thereby reducing the length of the found segment. Our task then becomes to find the nearest pair of indices $i$ and $j$ such that $a_i \oplus a_j \ge k$. Next, we consider all numbers in the array as binary strings padded with leading zeros to a length of $30$. Note that ordinary comparison of numbers is equivalent to lexicographical order on such strings. Let $x = x_{29}x_{28}{\dots}x_{1}x_{0}$, $y = y_{29}y_{28}{\dots}y_{1}y_{0}$, $k = k_{29}k_{28}{\dots}k_{1}k_{0}$, then the condition $x \oplus y \ge k$ is equivalent to the existence of a bit $i$ such that $x_{29} \oplus y_{29} = k_{29}, \dots, x_{i + 1} \oplus y_{i + 1} = k_{i + 1}$ and $x_{i} \oplus y_{i} > k_{i}$. Or $x \oplus y = k$, and then for any $i$ it will hold that $x_i \oplus y_i = k_i$. Suppose we have fixed the values of $x$, $k$, and $i$, then we need to find such a number $y$ that $y_{29} = x_{29} \oplus k_{29}, \dots y_{i + 1} = x_{i + 1} \oplus k_{i + 1}$ and $k_i = 0$ and $x_i \neq y_i$. For the given $a_i$ and $k$, we are interested in the nearest $a_j$ that satisfies these conditions. We will traverse the array $a$ from left to right and maintain in a binary trie all the numbers we have already passed. We will also keep track of the maximum index among the added numbers in the corresponding subtree at each node of the trie. Thus, the search for the nearest suitable $a_j$ for a given $a_i$ will look like a descent in the trie along the string $a_i \oplus k$. If the value in the $j$-th bit of the number $k$ is $1$, we need to descend along the edge $a_{ij} \oplus 1$. Otherwise, if $k_j = 0$, we should check the maximum index in the subtree $a_{ij} \oplus 1$, but we will descend along the edge $a_{ij} \oplus 0$. Thus, after processing all the numbers in the array, we will find a pair of nearest $a_i$, $a_j$, for which $a_i \oplus a_j \ge k$. The solution works in $O(N \log{10^9})$.
|
[
"binary search",
"bitmasks",
"data structures",
"dfs and similar",
"greedy",
"strings",
"trees",
"two pointers"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
const int LOG_X = 29;
struct node {
int children[2] { -1, -1 };
int last = -1;
};
int find(const vector<node>& trie, int value, int border) {
int res = -1;
int current = 0;
bool ok = true;
for (int position = LOG_X; ok && position >= 0; position--) {
int x_bit = (value >> position) & 1;
int k_bit = (border >> position) & 1;
auto& children = trie[current].children;
if (k_bit == 1) {
if (children[x_bit ^ 1] != -1) {
current = children[x_bit ^ 1];
} else {
ok = false;
}
} else {
if (children[x_bit ^ 1] != -1) {
res = max(res, trie[children[x_bit ^ 1]].last);
}
if (children[x_bit] != -1) {
current = children[x_bit];
} else {
ok = false;
}
}
}
if (ok) {
res = max(res, trie[current].last);
}
return res;
}
void add(vector<node>& trie, int value, int index) {
int current = 0;
trie[current].last = max(trie[current].last, index);
for (int position = LOG_X; position >= 0; position--) {
int x_bit = (value >> position) & 1;
if (trie[current].children[x_bit] == -1) {
trie[current].children[x_bit] = trie.size();
trie.push_back(node());
}
current = trie[current].children[x_bit];
trie[current].last = max(trie[current].last, index);
}
}
void solve() {
int n, k;
cin >> n >> k;
int ans = n + 1;
vector<node> trie(1);
for (int i = 0; i < n; i++) {
int x;
cin >> x;
add(trie, x, i);
int y = find(trie, x, k);
if (y != -1) {
ans = min(ans, i - y + 1);
}
}
if (ans == n + 1) {
cout << "-1\n";
} else {
cout << ans << '\n';
}
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
|
2094
|
A
|
Trippi Troppi
|
Trippi Troppi resides in a strange world. The ancient name of each country consists of three strings. The first letter of each string is concatenated to form the country's modern name.
Given the country's ancient name, please output the modern name.
|
This can be solved by printing the zero-th index of each string in sequence. For example, suppose your strings are $a$, $b$, and $c$. Then in C++, you can use cout << a[0] << b[0] << c[0] << '\n';, and similarly, in Python you can use print(a[0] + b[0] + c[0]). See your preferred language's syntax for how to obtain a given indexed character from a string.
|
[
"strings"
] | 800
|
t = int(input())
for _ in range(t):
inp = input()
for w in inp.split():
print(w[0],end="")
print()
|
2094
|
B
|
Bobritto Bandito
|
In Bobritto Bandito's home town of residence, there are an infinite number of houses on an infinite number line, with houses at $\ldots, -2, -1, 0, 1, 2, \ldots$. On day $0$, he started a plague by giving an infection to the unfortunate residents of house $0$. Each succeeding day, the plague spreads to \textbf{exactly one} healthy household that is next to an infected household. It can be shown that each day the infected houses form a continuous segment.
Let the segment starting at the $l$-th house and ending at the $r$-th house be denoted as $[l, r]$. You know that after $n$ days, the segment $[l, r]$ became infected. Find any such segment $[l', r']$ that could have been infected on the $m$-th day ($m \le n$).
|
What is the condition for $l'$ and $r'$ to be valid? We must have $r' - l' = m$ and $l\leq l' \leq 0 \leq r'\leq r$. The problem reduces to finding $l'$ and $r'$ such that $r' - l' = m$ and $l\leq l' \leq 0 \leq r'\leq r$. There are several different approaches; two are outlined below. One approach which runs in $O(1)$ time is to case on whether $m\leq r$. If $m\leq r$, we can choose $l' = 0$ and $r' = m$, and then we see that $r' - l' = m - 0 = m$ and $l \leq 0 \leq 0 \leq m \leq r$ as desired. If $m>r$, we can choose $l' = r-m$ and $r' = r$, and then we see that $r' - l' = r - (r-m) = m$ and $l \leq r-m \leq 0 \leq r \leq r$, where $l \leq r-m$ holds because $r-l = n \geq m$. An alternative approach which runs in $O(m)$ time is to simply simulate the process; expand in an arbitrary direction which satisfies the desired inequalities until $r' - l' = m$.
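A minimal sketch (ours, not the reference code below, which uses an equivalent but differently phrased computation) of the $O(1)$ casework described above; the helper name pickSegment is hypothetical.
#include <utility>
// Given n, m, l, r with r - l = n and l <= 0 <= r, return a valid pair (l', r').
std::pair<long long, long long> pickSegment(long long n, long long m, long long l, long long r) {
    if (m <= r) return {0, m};   // l' = 0, r' = m satisfies l <= 0 <= m <= r
    return {r - m, r};           // here r - m >= l because r - l = n >= m
}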
|
[
"brute force",
"constructive algorithms"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
int n, m, l, r; cin >> n >> m >> l >> r;
int diff = n - m;
l = abs(l);
if (l >= diff) {
l -= diff;
diff = 0;
}
else {
diff -= l;
l = 0;
}
cout << -l << " " << r - diff << '\n';
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2094
|
C
|
Brr Brrr Patapim
|
Brr Brrr Patapim is trying to learn of Tiramisù's secret passcode, which is a permutation$^{\text{∗}}$ of $2\cdot n$ elements. To help Patapim guess, Tiramisù gave him an $n\times n$ grid $G$, in which $G_{i,j}$ (or the element in the $i$-th row and $j$-th column of the grid) contains $p_{i+j}$, or the $(i+j)$-th element in the permutation.
Given this grid, please help Patapim crack the forgotten code. It is guaranteed that the permutation exists, and it can be shown that the permutation can be determined uniquely.
\begin{footnotesize}
$^{\text{∗}}$A permutation of $m$ integers is a sequence of $m$ integers which contains each of $1,2,\ldots,m$ exactly once. For example, $[1, 3, 2]$ and $[2, 1]$ are permutations, while $[1, 2, 4]$ and $[1, 3, 2, 3]$ are not.
\end{footnotesize}
|
How can we find $p_i$ for $2\leq i\leq n$? How can we find $p_i$ for $n+1\leq i\leq 2n$? For $2\leq i\leq n$, we can find $p_i$ by checking $G_{1, i-1}$. For $n+1\leq i\leq 2n$, we can find $p_i$ by checking $G_{n, i-n}$. Observe that for all $2\leq i\leq n$, we can find $p_i$ by checking $G_{1, i-1}$. Then for all $n+1\leq i\leq 2n$, we can find $p_i$ by checking $G_{n, i-n}$. Now, we only have to find the value of $p_1$. However, each value from $1$ to $2n$ must appear exactly once in $p$. Thus, we can simply find the value which has not appeared yet, and choose that as $p_1$. This can be done by storing which values have been seen in a boolean array, and then finding which one remains false.
|
[
"math"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
int n; cin >> n;
vector<int> ans(2*n+1, 0);
vector<bool> used(2*n+1, false);
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= n; j++) {
int x; cin >> x;
ans[i + j] = x;
used[x] = true;
}
}
for (int i = 1; i <= 2 * n; i++) {
if (ans[i] != 0) cout << ans[i] << " ";
else {
for (int j = 1; j <= n * 2; j++) {
if (!used[j]) {
used[j] = true;
cout << j << " ";
break;
}
}
}
}
cout << "\n";
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2094
|
D
|
Tung Tung Sahur
|
You have two drums in front of you: a left drum and a right drum. A hit on the left can be recorded as "L", and a hit on the right as "R".
The strange forces that rule this world are fickle: sometimes, a blow sounds once, and sometimes it sounds twice. Therefore, a hit on the left drum could have sounded as either "L" or "LL", and a hit on the right drum could have sounded as either "R" or "RR".
The sequence of hits made is recorded in the string $p$, and the sounds heard are in the string $s$. Given $p$ and $s$, determine whether it is true that the string $s$ could have been the result of the hits from the string $p$.
For example, if $p=$"LR", then the result of the hits could be any of the strings "LR", "LRR", "LLR", and "LLRR", but the strings "LLLR" or "LRL" cannot.
|
Throughout, write $s_1$ for the string of hits $p$ and $s_2$ for the string of sounds $s$. What would happen if $s_1$ consists of only $L$? In this case, the answer is yes if and only if $|s_1| \leq |s_2| \leq 2|s_1|$. Now try to generalize this. First, let us consider a simpler case of the problem: suppose that there is only $L$ in both $s_1$ and $s_2$. Then it is clear that the answer is yes if and only if $|s_1| \leq |s_2| \leq 2|s_1|$. If this holds (in particular, we also require that $s_1$ and $s_2$ consist of only a single character, and the same character), we will call $s_2$ an extension of $s_1$. Now, observe that the problem given is simply the simpler version, repeated several times alternating between $L$ and $R$. So we can partition $s_1$ into "blocks" (where we define a block to be a maximal group of contiguous identical characters). Then the answer is yes if and only if $s_2$ is the concatenation of extensions of the blocks of $s_1$. For example, consider $s_1 = LLLLLRL$. This consists of five $L$s, one $R$, and one $L$. So the answer is yes if and only if $s_2$ consists of between five and ten $L$s (inclusive), between one and two $R$s, and between one and two $L$s, concatenated in that order.
|
[
"greedy",
"strings",
"two pointers"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
string a, b; cin >> a >> b;
int n = a.size();
int m = b.size();
if (m < n || m > 2 * n || a[0] != b[0]) {
cout << "NO\n";
return;
}
vector<int> aa, bb;
int cnt = 1;
for (int i = 1; i < n; i++) {
if (a[i] != a[i-1]) {
aa.push_back(cnt);
cnt = 1;
}
else cnt++;
}
aa.push_back(cnt);
cnt = 1;
for (int i = 1; i < m; i++) {
if (b[i] != b[i-1]) {
bb.push_back(cnt);
cnt = 1;
}
else cnt++;
}
bb.push_back(cnt);
if (aa.size() != bb.size()) {
cout << "NO\n";
return;
}
n = aa.size();
for (int i = 0; i < n; i++) {
if (aa[i] > bb[i] || aa[i] * 2 < bb[i]) {
cout << "NO\n";
return;
}
}
cout << "YES\n";
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2094
|
E
|
Boneca Ambalabu
|
Boneca Ambalabu gives you a sequence of $n$ integers $a_1,a_2,\ldots,a_n$.
Output the maximum value of $(a_k\oplus a_1)+(a_k\oplus a_2)+\ldots+(a_k\oplus a_n)$ among all $1 \leq k \leq n$. Note that $\oplus$ denotes the bitwise XOR operation.
|
Consider each bit independently. Suppose we fix $k$. How can we compute the desired sum quickly? Note that this may require preprocessing. Here, it suffices to consider each bit independently, because addition is both commutative and associative. For $0\leq i < 30$, let $cnt_i$ denote the number of elements of $a$ which have the $i$-th bit set (where bits are indexed from the least significant bit being the $0$-th bit); note that these can be found in $O(30n)$ time (more generally, we need $O(\log\max{a_i})$ operations per element). Now, suppose we choose a particular $k$. Then we can find the sum $(a_k\oplus a_1) + (a_k\oplus a_2) \dots + (a_k\oplus a_n)$ as follows: for a given bit position $i$, the number of elements $a_j$ such that $a_k \oplus a_j$ has the $i$-th bit set will be $cnt_i$ if the $i$-th bit of $a_k$ is not set, and $n-cnt_i$ if the $i$-th bit of $a_k$ is set. So we can simply add $cnt_i \cdot 2^i$ to our sum if the $i$-th bit of $a_k$ is not set, and add $(n-cnt_i) \cdot 2^i$ to our sum if the $i$-th bit of $a_k$ is set. This allows us to compute $(a_k\oplus a_1) + (a_k\oplus a_2) \dots + (a_k\oplus a_n)$ for any $k$ in $O(30)$ time, so we can now compute this sum for all $k$ in $O(30n)$ time and output the maximum.
|
[
"bitmasks"
] | 1,200
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
int n; cin >> n;
int arr[n+1];
vector<int> cnt(30, 0);
for (int i = 1; i <= n; i++) {
cin >> arr[i];
for (int j = 0; j < 30; j++) {
cnt[j] += ((arr[i] >> j) & 1);
}
}
int ans = 0;
for (int i = 1; i <= n; i++) {
int tot = 0;
for (int j = 0; j < 30; j++) {
bool f = ((arr[i] >> j) & 1);
if (f) tot += (1 << j) * (n - cnt[j]);
else tot += (1 << j) * cnt[j];
}
ans = max(ans, tot);
}
cout << ans << "\n";
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2094
|
F
|
Trulimero Trulicina
|
Trulicina gives you integers $n$, $m$, and $k$. It is guaranteed that $k\geq 2$ and $n\cdot m\equiv 0 \pmod{k}$.
Output a $n$ by $m$ grid of integers such that each of the following criteria hold:
- Each integer in the grid is between $1$ and $k$, inclusive.
- Each integer from $1$ to $k$ appears an equal number of times.
- No two cells that share an edge have the same integer.
It can be shown that such a grid always exists. If there are multiple solutions, output any.
|
What would happen if we tried outputting the numbers $1, \dots, k, 1, \dots, k, \dots$ in the natural reading order? For example, for $n=3$, $m=4$, and $k=6$, we would output the following: $\begin{bmatrix}1&2&3&4\\5&6&1&2\\3&4&5&6\end{bmatrix}$. In which cases does this fail? This fails if and only if $m$ is a multiple of $k$, because we see that horizontally adjacent elements differ by $1\pmod k$ and vertically adjacent elements differ by $m\pmod k$. Now try to resolve this case. There are many working constructions for this problem; one of them is to case on whether or not $m$ is a multiple of $k$. Suppose $m$ is not a multiple of $k$. For example, consider the second sample, where $n=3$, $m=4$, and $k=6$. Then we can output the numbers $1, \dots, k, 1, \dots, k, \dots$ in the natural reading order, as follows: $\begin{bmatrix}1&2&3&4\\5&6&1&2\\3&4&5&6\end{bmatrix}$, and we see that any two horizontally adjacent elements differ by $1\pmod k$, whereas any two vertically adjacent elements differ by $m\pmod k$, so no two adjacent elements are the same, as desired. Now, suppose $m$ is a multiple of $k$. For example, consider the case $n=4$, $m=6$, and $k=3$. Suppose we were to try the above strategy, then we get $\begin{bmatrix}1&2&3&1&2&3\\1&2&3&1&2&3\\1&2&3&1&2&3\\1&2&3&1&2&3\end{bmatrix}$, which doesn't work, since vertically adjacent elements are the same. We can fix this by cyclically shifting every other row, as follows: $\begin{bmatrix}1&2&3&1&2&3\\2&3&1&2&3&1\\1&2&3&1&2&3\\2&3&1&2&3&1\end{bmatrix}$, and we see that any two horizontally adjacent elements differ by $1\pmod k$ as before, whereas any two vertically adjacent elements also differ by $1\pmod k$, so no two adjacent elements are the same, as desired. A compact sketch of this construction is given below.
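A small illustrative sketch (our own, in C++ even though the reference solution below is in Python) of the casework construction just described; cellValue is a hypothetical helper name.
// Value at 0-indexed cell (i, j): plain reading order 1..k, with every other
// row cyclically shifted by one position when m is a multiple of k.
int cellValue(long long i, long long j, long long m, long long k) {
    long long shift = (m % k == 0) ? (i % 2) : 0;
    return (int)((i * m + j + shift) % k) + 1;
}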
|
[
"constructive algorithms"
] | 1,600
|
import sys
input = sys.stdin.readline
for _ in range(int(input())):
n,m,k = map(int,input().split())
LAST = [-1 for _ in range(m)]
for i in range(n):
shift = False
CUR = [0 for _ in range(m)]
for j in range(m):
elm = ((i * m + j) % k) + 1
if elm == LAST[j]:
shift = True
CUR[j] = elm
if shift:
CUR = [CUR[(j+1)%m] for j in range(m)]
print(*CUR)
LAST = CUR
|
2094
|
G
|
Chimpanzini Bananini
|
Chimpanzini Bananini stands on the brink of a momentous battle—one destined to bring finality.
For an arbitrary array $b$ of length $m$, let's denote the rizziness of the array to be $\sum_{i=1}^mb_i\cdot i=b_1\cdot 1+b_2\cdot 2+b_3\cdot 3+\ldots + b_m\cdot m$.
Chimpanzini Bananini gifts you an empty array. There are three types of operations you can perform on it.
- Perform a cyclic shift on the array. That is, the array $[a_1, a_2, \ldots, a_n]$ becomes $[a_n, a_1, a_2, \ldots, a_{n-1}].$
- Reverse the entire array. That is, the array $[a_1, a_2, \ldots, a_n]$ becomes $[a_n, a_{n-1}, \ldots, a_1].$
- Append an element to the end of the array. The array $[a_1, a_2, \ldots, a_n]$ becomes $[a_1, a_2, \ldots, a_n, k]$ after appending $k$ to the end of the array.
After each operation, you are interested in calculating the rizziness of your array.
Note that all operations are \textbf{persistent}. This means that each operation modifies the array, and subsequent operations should be applied to the current state of the array after the previous operations.
|
Note that reversing an array then pushing an element to the back is similar to simply pushing an element to the front. Thus, we would like to push elements to both ends of the array. What data structure supports this? A deque is the most natural choice to use. Most languages, including C++, Java, and Python, include deques in their standard library, so you likely do not have to implement it yourself. What other values do we need to maintain in order to keep track of score? It suffices to keep track of the current score, the score of the array if it were backwards, the size of the array, and the sum of the array. We can solve this using a deque. Let $score$ denote the current score, and $rscore$ denote the score of the array if it were backwards. Let $size$ denote the size of the array, and $sum$ denote the sum of the array. Then we consider how these four values change as each operation is performed. Suppose operation 1 is performed. This is equivalent to popping the back element of the array and pushing it to the front. When you pop the back element of the array, $score$ decreases by $a_n \cdot size$. Then when you push it to the front, $score$ increases by $sum$ because $a_n$ is pushed to the first spot and every element from $a_1$ to $a_{n-1}$ moves forward one spot. Notice that $rscore$ changes in the reverse way; this is equivalent to popping the front element of the array and pushing it to the back. When you pop the front element of the array, $rscore$ decreases by $sum$, and when you push it to the back, $rscore$ increases by $a_n \cdot size$. Note that $size$ and $sum$ remain unchanged. Suppose operation 2 is performed. Then we swap $score$ and $rscore$, and we also want to "reverse" the array. However, it is costly to entirely reverse the deque. Instead, we will set a flag to indicate that the array has been reversed (and similarly, if the flag is already set, we will unset it). If the flag is set, we simply switch the front and back ends whenever we access or modify the deque while performing operations 1 or 3. Suppose operation 3 is performed. Then we see that $size$ increases by $1$ and $sum$ increases by $k$. Then $score$ increases by $k \cdot size$ and $rscore$ increases by $sum$ (using the new value of $sum$), by identical reasoning to that in operation 1. Additional optimization: we don't actually have to maintain $rscore$; we can obtain it with the expression: $(n+1)\sum_{i=1}^na_i-score$. This is because $score + rscore = (n+1)\sum_{i=1}^na_i$ since the $i$-th term is counted $i$ times in $score$ and $n+1-i$ times in $rscore$.
|
[
"data structures",
"implementation",
"math"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
int norm = 0, rev = 0;
int q; cin >> q;
int tot = 0;
int n = 0;
deque<int> qNorm, qRev;
while (q--) {
int s; cin >> s;
if (s == 1) {
int last = qNorm.back();
qNorm.pop_back();
qNorm.push_front(last);
norm += (tot - last);
norm -= last * n;
norm += last;
last = qRev.front();
qRev.pop_front();
qRev.push_back(last);
rev -= (tot - last);
rev += last * n;
rev -= last;
}
else if (s == 2) {
swap(rev, norm);
swap(qNorm, qRev);
}
else if (s == 3) {
n++;
int k; cin >> k;
qNorm.push_back(k);
qRev.push_front(k);
norm += k * n;
rev += tot;
rev += k;
tot += k;
}
cout << norm << "\n";
}
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2094
|
H
|
La Vaca Saturno Saturnita
|
Saturnita's mood depends on an array $a$ of length $n$, which only he knows the meaning of, and a function $f(k, a, l, r)$, which only he knows how to compute. Shown below is the pseudocode for his function $f(k, a, l, r)$.
\begin{verbatim}
function f(k, a, l, r):
ans := 0
for i from l to r (inclusive):
while k is divisible by a[i]:
k := k/a[i]
ans := ans + k
return ans
\end{verbatim}
You are given $q$ queries, each containing integers $k$, $l$, and $r$. For each query, please output $f(k,a,l,r)$.
|
What must the value of $a[i]$ be in order for $k$ to change? $a[i]$ must be a divisor of $k$. Can we use this to bound the number of times $k$ changes? For each divisor $d$ of $k$, $k$ can only change once, because after the first iteration with $a[i] = d$, $k$ is no longer divisible by $d$. Thus, the number of times $k$ changes is bounded by the number of divisors of $k$. Consider a particular query. We make the following two observations: first, the value of $k$ only changes if $a[i]$ is a divisor of $k$, and second, if there exists $a[i_1] = a[i_2]$ with $l\leq i_1 < i_2 \leq r$, then the value of $k$ will not change at $i=i_2$ (because after the iteration $i=i_1$, we have that $k$ is no longer divisible by $a[i_2]$). This allows us to bound the number of times $k$ changes by $d(k)$, where $d(k)$ is the number of divisors of $k$. Note that the divisors of $k$ can be found in $O(\sqrt{k})$ time by checking if $a$ divides $k$ for every $a$ from $1$ to $\sqrt{k}$, and if so, $a$ and $\frac{k}{a}$ are both divisors of $k$ (the trivial divisor $1$ can be discarded). Furthermore, at the scale constrained by the problem bounds, $d(k)$ is around $O(\sqrt[3]{k})$, so we only have to update $k$ at most $O(\sqrt[3]{k})$ times. Now, we would like to find the first time each divisor of $k$ appears in $a[l], \dots, a[r]$. This can be done by storing the array $a$ in a map, where each value is mapped to a vector of indices where that value appears. Then for a given divisor of $k$, we can use lower_bound() (or generally, binary search) to find the smallest index at or after $l$ where this divisor appears, and we can check if it is no greater than $r$. Now, we have a list of indices at which $k$ might change. So, suppose that $k$ changes to $k'$ at index $i_1$, and then changes to $k''$ at index $i_2$ (and these are adjacent changes). Then we know that we have a value of $k'$ from $i=i_1$ to $i_2-1$, so we can add $k' \cdot (i_2 - i_1)$ to $ans$. We can then do this for all changes. This allows us to solve the problem in $O(n\log{n})$ preprocessing time and $O(\sqrt{k} + d(k)\log{n})$ time per query. There are a few ways to optimize the runtime further, if your implementation is too slow. We can use a vector (of vectors) instead of a map, since $A = \max a_i = 10^5$ is reasonably small. Note that instantiating a vector of this size in every test case is too slow, so you may have to instantiate it globally and clear the vectors that were used after each test case. Another optimization is to preprocess divisors instead of computing them on the spot. We can compute and store the divisors of all integers $2\leq a_i\leq 10^5$ in $O(A\log A)$ time in a vector of vectors where $divisor[a_i]$ contains all divisors of $a_i$ as follows: for all $2\leq i\leq A$, we push $i$ into $divisors[j]$ for all multiples $j$ of $i$. (The runtime follows from the fact that there are $\lfloor\frac{A}{i}\rfloor$ multiples of $i$ which are at most $A$, so across all $i$, we have at most $\frac{A}{1} + \frac{A}{2} + \cdots + \frac{A}{A} = A(\frac{1}{1} + \cdots + \frac{1}{A}) = AH(A) = O(A\log A)$ computations). This allows us to remove the $\sqrt{k}$ term in the runtime of each query. A short sketch of this sieve-style preprocessing is given below.
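As a small illustration (our own sketch, not part of the reference solution below, which finds divisors by trial division per query), the divisor preprocessing described above can look as follows; the bound $A = 10^5$ for the maximum $a_i$ is taken from the editorial.
#include <bits/stdc++.h>
using namespace std;
const int A = 100000;                    // assumed maximum value of a_i
vector<vector<int>> divisorsOf(A + 1);   // divisorsOf[v] = all divisors of v that are >= 2
void precomputeDivisors() {
    // Push i into the list of every multiple of i. The total work is
    // A/2 + A/3 + ... + A/A = O(A log A).
    for (int i = 2; i <= A; i++)
        for (int j = i; j <= A; j += i)
            divisorsOf[j].push_back(i);
}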
|
[
"binary search",
"brute force",
"math",
"number theory"
] | 1,900
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
map<int, vector<int>> pos;
map<int, int> ptr;
int n, q; cin >> n >> q;
int arr[n+1];
for (int i = 1; i <= n; i++) {
cin >> arr[i];
pos[arr[i]].push_back(i);
ptr[arr[i]] = 0;
}
vector<tuple<int, int, int, int>> qr(q);
int c = 0;
for (auto &[l, r, v, i]: qr) {
cin >> v >> l >> r;
i = c++;
}
vector<int> anss(q);
sort(qr.begin(), qr.end());
int prevL = 1;
for (auto [l, r, v, idd]: qr) {
for (int j = prevL; j < l; j++) {
ptr[arr[j]]++;
}
prevL = l;
vector<pair<int, int>> facts;
for (int i = 1; i * i <= v; i++) {
if (v % i == 0) {
int fact1 = v / i;
if (!(pos[fact1].size() == 0 || ptr[fact1] >= pos[fact1].size() || pos[fact1][ptr[fact1]] > r || fact1 == 1)) {
facts.push_back({pos[fact1][ptr[fact1]], fact1});
}
int fact2 = i;
if (fact2 == fact1) continue;
if (!(pos[fact2].size() == 0 || ptr[fact2] >= pos[fact2].size() || pos[fact2][ptr[fact2]] > r || fact2 == 1)) {
facts.push_back({pos[fact2][ptr[fact2]], fact2});
}
}
}
sort(facts.begin(), facts.end());
int pr = l;
int ans = 0;
for (auto [idx, val]: facts) {
int rr = idx - pr;
ans += v * rr;
while (v % val == 0) v /= val;
pr = idx;
}
if (facts.empty()) ans = v * (r - l + 1);
else ans += (r - pr + 1) * v;
anss[idd] = ans;
}
for (int i = 0; i < q; i++) cout << anss[i] << "\n";
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2096
|
A
|
Wonderful Sticks
|
You are the proud owner of $n$ sticks. Each stick has an integer length from $1$ to $n$. The lengths of the sticks are \textbf{distinct}.
You want to arrange the sticks in a row. There is a string $s$ of length $n - 1$ that describes the requirements of the arrangement.
Specifically, for each $i$ from $1$ to $n - 1$:
- If $s_i = <$, then the length of the stick at position $i + 1$ must be \textbf{smaller} than all sticks before it;
- If $s_i = >$, then the length of the stick at position $i + 1$ must be \textbf{larger} than all sticks before it.
Find any valid arrangement of sticks. We can show that an answer always exists.
|
Create your own test cases and solve them! What can you observe? What is the length of the $n$-th stick? If $s_{n - 1} = \texttt{<}$, then $a_n = 1$, because the $n$-th stick must be shorter than all the other sticks. If $s_{n - 1} = \texttt{>}$, then $a_n = n$, because the $n$-th stick must be longer than all the other sticks. Can we do something similar for the remaining sticks? Yes! Remove the $n$-th stick, and then solve for the remaining $n - 1$ sticks. We can make similar observations about the $(n - 1)$-th stick. Then, remove the $(n - 1)$-th stick, and solve for the remaining $n - 2$ sticks, and so on. So the algorithm is as follows: Initialize an array $b = [1, 2, \ldots, n]$. This represents the lengths of the remaining sticks. For each $i$ from $n - 1$ to $1$: If $s_i = \texttt{<}$, then set $a_{i + 1} = \min(b)$. If $s_i = \texttt{>}$, then set $a_{i + 1} = \max(b)$. Now remove $a_{i + 1}$ from $b$, and continue. If $s_i = \texttt{<}$, then set $a_{i + 1} = \min(b)$. If $s_i = \texttt{>}$, then set $a_{i + 1} = \max(b)$. Now remove $a_{i + 1}$ from $b$, and continue. In the end, we have $1$ remaining element in $b$. This is the value of $a_1$. The time complexity of this solution is $O(n^2)$. We can notice that at any point, $b$ has the form $[l, l + 1, \ldots, r]$. So we can represent $b$ using two integers $l$ and $r$: When we remove $\min(b)$, we increase $l$ by $1$. When we remove $\max(b)$, we decrease $r$ by $1$. Now the time complexity of our solution is $O(n)$.
|
[
"constructive algorithms",
"greedy"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
void test() {
int n;
cin >> n;
string s;
cin >> s;
int l = 1;
int r = n;
vector<int> a(n);
for (int i = n - 2; i >= 0; i--) {
if (s[i] == '<') {
a[i + 1] = l;
l++;
}
if (s[i] == '>') {
a[i + 1] = r;
r--;
}
}
a[0] = l;
for (int i = 0; i < n; i++) {
cout << a[i] << " ";
}
cout << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
for (int i = 0; i < t; i++) {
test();
}
return 0;
}
|
2096
|
B
|
Wonderful Gloves
|
You are the proud owner of many colorful gloves, and you keep them in a drawer. Each glove is in one of $n$ colors numbered $1$ to $n$. Specifically, for each $i$ from $1$ to $n$, you have $l_i$ left gloves and $r_i$ right gloves with color $i$.
Unfortunately, it's late at night, so \textbf{you can't see any of your gloves}. In other words, you will only know the color and the type (left or right) of a glove \textbf{after} you take it out of the drawer.
A matching pair of gloves with color $i$ consists of exactly one left glove and one right glove with color $i$. Find the minimum number of gloves you need to take out of the drawer to \textbf{guarantee} that you have \textbf{at least} $k$ matching pairs of gloves with \textbf{different} colors.
Formally, find the smallest positive integer $x$ such that:
- For any set of $x$ gloves you take out of the drawer, there will always be at least $k$ matching pairs of gloves with different colors.
|
Let's look at the third test case in the example. We have $k = 2$, and the answer is $x = 303$. If we take out $302$ gloves, then it's possible that we only have matching pairs of $1$ color. How can we generalize this fact for any $k$ and $x$? If we take out $(x - 1)$ gloves or fewer, then it's possible that we only have at most $(k - 1)$ matching pairs of different colors. So we can solve the problem as follows: Let $y$ be the maximum number of gloves we can take out so that there are at most $(k - 1)$ matching pairs of different colors. Then the answer to the problem is $x = y + 1$. Set $m = k - 1$. Now we need to find the maximum number of gloves so that there are at most $m$ matching pairs of different colors. Try solving it for $m = 0$ first. When $m = 0$, we want to find the maximum number of gloves so that there are no matching pairs. Therefore, for each color $i$, we can either take all the left gloves, or all the right gloves. So we should choose the type with more gloves. Formally, let $a_i = \max(l_i, r_i)$. Then the maximum number of gloves we can take is $y = a_1 + a_2 + \ldots + a_n$. Now, using the solution for $m = 0$, solve it for $m = 1$. Then solve it for $m = 2$, and so on. When $m = 0$, we take $a_i$ gloves of color $i$. So we're left with $b_i = \min(l_i, r_i)$ gloves of color $i$ that we haven't taken. To get from $m = 0$ to $m = 1$, we can additionally form matching pairs of $1$ color. Therefore, we can choose a color $i$, and take the remaining $b_i$ gloves. So we should choose the color $i$ with the maximum value of $b_i$. In other words, for $m = 1$, the maximum number of gloves we can take is $y = a_1 + a_2 + \ldots + a_n + \max(b_1, b_2, \ldots, b_n)$. To get from $m = 0$ to $m = 2$, we can additionally form matching pairs of $2$ different colors. Therefore, we can choose two different colors $i$ and $j$, and take the remaining $b_i + b_j$ gloves. So we should choose the two colors $i$ and $j$ with the maximum value of $b_i + b_j$. We can generalize this idea for any $m$. So the algorithm is as follows: Set $m = k - 1$. For each $i$ from $1$ to $n$, let $a_i = \max(l_i, r_i)$ and $b_i = \min(l_i, r_i)$. Set $y = a_1 + a_2 + \ldots + a_n$. Sort the values of $b$ in descending order. Add the $m$ largest values of $b$ to $y$. The answer is $x = y + 1$. The time complexity of this solution is $O(n \log n)$.
|
[
"greedy",
"math",
"sortings"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
void test() {
int n, k;
cin >> n >> k;
int m = k - 1;
vector<int> l(n);
for (int i = 0; i < n; i++) {
cin >> l[i];
}
vector<int> r(n);
for (int i = 0; i < n; i++) {
cin >> r[i];
}
vector<int> a(n), b(n);
long long y = 0;
for (int i = 0; i < n; i++) {
a[i] = max(l[i], r[i]);
b[i] = min(l[i], r[i]);
y += a[i];
}
sort(b.begin(), b.end(), greater<>());
for (int i = 0; i < m; i++) {
y += b[i];
}
long long x = y + 1;
cout << x << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
for (int i = 0; i < t; i++) {
test();
}
return 0;
}
|
2096
|
C
|
Wonderful City
|
You are the proud leader of a city in Ancient Berland. There are $n^2$ buildings arranged in a grid of $n$ rows and $n$ columns. The height of the building in row $i$ and column $j$ is $h_{i, j}$.
The city is beautiful if no two adjacent by side buildings have the same height. In other words, it must satisfy the following:
- There \textbf{does not} exist a position $(i, j)$ ($1 \leq i \leq n$, $1 \leq j \leq n - 1$) such that $h_{i, j} = h_{i, j + 1}$.
- There \textbf{does not} exist a position $(i, j)$ ($1 \leq i \leq n - 1$, $1 \leq j \leq n$) such that $h_{i, j} = h_{i + 1, j}$.
There are $n$ workers at company A, and $n$ workers at company B. Each worker can be hired \textbf{at most once}.
It costs $a_i$ coins to hire worker $i$ at company A. After hiring, worker $i$ will:
- Increase the heights of all buildings in row $i$ by $1$. In other words, increase $h_{i, 1}, h_{i, 2}, \ldots, h_{i, n}$ by $1$.
It costs $b_j$ coins to hire worker $j$ at company B. After hiring, worker $j$ will:
- Increase the heights of all buildings in column $j$ by $1$. In other words, increase $h_{1, j}, h_{2, j}, \ldots, h_{n, j}$ by $1$.
Find the minimum number of coins needed to make the city beautiful, or report that it is impossible.
|
We are given an $n \times n$ matrix of positive integers. There are two types of operations we can perform: When we hire worker $i$ from company A, let's call it row operation $i$. When we hire worker $j$ from company B, let's call it column operation $j$. Each row and column operation can be performed at most once. After performing the operations, the matrix must satisfy the following: Horizontal Condition: No two horizontally adjacent elements are the same. Vertical Condition: No two vertically adjacent elements are the same. Suppose the matrix does not satisfy the Horizontal Condition, so there is a position $(i, j)$ such that $h_{i,j} = h_{i, j + 1}$. What operations can we perform to fix this? Don't worry about other positions for now. We just want $h_{i,j} \neq h_{i, j + 1}$. We can either: Only perform column operation $j$, and increase $h_{i, j}$ by $1$; Or only perform column operation $(j + 1)$, and increase $h_{i, j + 1}$ by $1$. Notice that when we perform row operation $i$, we increase both $h_{i, j}$ and $h_{i, j + 1}$ by $1$. Therefore, the difference between the two elements does not change. Formally, when we perform row operation $i$, the value $d = h_{i, j} - h_{i, j + 1}$ does not change. From our previous observations, we see that: Only column operations affect the Horizontal Condition. Only row operations affect the Vertical Condition. So we can solve for the Horizontal Condition and the Vertical Condition independently. Using DP, we will calculate: The minimum total cost of the row operations required to satisfy the Vertical Condition. The minimum total cost of the column operations required to satisfy the Horizontal Condition. To get the answer, we add the two costs. Let $dp(i, x)$ be the minimum total cost of the row operations required so that: The first $i$ rows of the matrix satisfy the Vertical Condition. If $x = 0$, then we do not perform row operation $i$. If $x = 1$, then we perform row operation $i$. If it is impossible, then $dp(i, x) = \infty$. Our base cases are $dp(1, 0) = 0$ and $dp(1, 1) = a_1$. For $i > 1$, initialize $dp(i, x)$ to $\infty$. Our $dp(i, x)$ will depend on some $dp(i - 1, y)$. Now we need to check if row $(i - 1)$ and row $i$ satisfy the Vertical Condition: The elements in row $i - 1$ are: $(h_{i - 1, 1} + y), (h_{i - 1, 2} + y), \ldots, (h_{i - 1, n} + y)$. The elements in row $i$ are: $(h_{i, 1} + x), (h_{i, 2} + x), \ldots, (h_{i, n} + x)$. If no two vertically adjacent elements are the same, then: For $x = 0$, set $dp(i, x) = \min(dp(i, x), dp(i - 1, y))$. For $x = 1$, set $dp(i, x) = \min(dp(i, x), dp(i - 1, y) + a_i)$. The minimum total cost required so that the entire matrix satisfies the Vertical Condition is $\min(dp(n, 0), dp(n, 1))$. Similarly, we can solve for the Horizontal Condition. The time complexity of this solution is $O(n^2)$. To avoid repetition, we can solve for the Horizontal Condition by transposing the matrix and treating it as the Vertical Condition instead.
|
[
"dp",
"implementation"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e18;
long long solveHor(int n, vector<vector<int>>& h, vector<int>& a) {
vector<vector<long long>> dp(n, vector<long long>(2, INF));
dp[0][0] = 0;
dp[0][1] = a[0];
for (int i = 1; i < n; i++) {
for (int x = 0; x < 2; x++) {
for (int y = 0; y < 2; y++) {
bool ok = true;
for (int j = 0; j < n; j++) {
ok &= (h[i - 1][j] + y != h[i][j] + x);
}
if (ok) {
if (x == 0) {
dp[i][x] = min(dp[i][x], dp[i - 1][y]);
}
if (x == 1) {
dp[i][x] = min(dp[i][x], dp[i - 1][y] + a[i]);
}
}
}
}
}
return min(dp[n - 1][0], dp[n - 1][1]);
}
void transpose(int n, vector<vector<int>>& h) {
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
swap(h[i][j], h[j][i]);
}
}
}
void test() {
int n;
cin >> n;
vector<vector<int>> h(n, vector<int>(n));
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
cin >> h[i][j];
}
}
vector<int> a(n);
for (int i = 0; i < n; i++) {
cin >> a[i];
}
vector<int> b(n);
for (int i = 0; i < n; i++) {
cin >> b[i];
}
long long horCost = solveHor(n, h, a);
transpose(n, h);
long long verCost = solveHor(n, h, b);
long long totalCost = horCost + verCost;
if (totalCost >= INF) {
cout << -1 << '\n';
}
else {
cout << totalCost << '\n';
}
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
for (int i = 0; i < t; i++) {
test();
}
return 0;
}
|
2096
|
D
|
Wonderful Lightbulbs
|
You are the proud owner of an infinitely large grid of lightbulbs, represented by a Cartesian coordinate system. Initially, all of the lightbulbs are turned off, except for one lightbulb, where you buried your proudest treasure.
In order to hide your treasure's position, you perform the following operation an arbitrary number of times (possibly zero):
- Choose two \textbf{integer} numbers $x$ and $y$, and switch the state of the $4$ lightbulbs at $(x, y)$, $(x, y + 1)$, $(x + 1, y - 1)$, and $(x + 1, y)$. In other words, for each lightbulb, turn it on if it was off, and turn it off if it was on. Note that there are \textbf{no constraints} on $x$ and $y$.
In the end, there are $n$ lightbulbs turned on at coordinates $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$. Unfortunately, you have already forgotten where you buried your treasure, so now you have to figure out one possible position of the treasure. Good luck!
|
What can we say about the number of lightbulbs that are turned on for any valid configuration in the input? Can the number of lightbulbs that are turned on be even? The number of lightbulbs that are turned on is always odd. This is because we start with exactly one lightbulb turned on. Every operation changes the state of $4$ lightbulbs, which is an even number. So no matter what operations we perform, we will always have an odd number of lightbulbs turned on. Let's use the idea of parity to come up with stricter conditions. Consider the lightbulbs on the vertical line $x = c$. What can we say about the number of lightbulbs that are turned on? Suppose the treasure is buried at position $(s, t)$. If $c = s$, then the line $x = c$ has an odd number of lightbulbs turned on. If $c \neq s$, then the line $x = c$ has an even number of lightbulbs turned on. This is because every operation $(u, v)$ changes the state of $4$ lightbulbs: $(u, v)$, $(u, v + 1)$, $(u + 1, v - 1)$, $(u + 1, v)$. $2$ of them are on the vertical line $x = u$. $2$ of them are on the vertical line $x = u + 1$. From our previous observations, we can uniquely determine the value of $s$. Can we do the same for the value of $t$? Consider the lightbulbs on the diagonal line $x + y = c$. What can we say about the number of lightbulbs that are turned on? Suppose the treasure is buried at position $(s, t)$. If $c = s + t$, then the line $x + y = c$ has an odd number of lightbulbs turned on. If $c \neq s + t$, then the line $x + y = c$ has an even number of lightbulbs turned on. This is because every operation $(u, v)$ changes the state of $4$ lightbulbs: $(u, v)$, $(u, v + 1)$, $(u + 1, v - 1)$, $(u + 1, v)$. $2$ of them are on the diagonal line $x + y = u + v$. $2$ of them are on the diagonal line $x + y = u + v + 1$. So, to summarize, we find two lines: The vertical line that has an odd number of lightbulbs turned on The diagonal line that has an odd number of lightbulbs turned on The intersection of these two lines is the position of the treasure.
|
[
"combinatorics",
"constructive algorithms",
"math"
] | 2,000
|
#include <bits/stdc++.h>
using namespace std;
void test() {
int n;
cin >> n;
map<int, int> cntVer, cntDiag;
for (int i = 0; i < n; i++) {
int x, y;
cin >> x >> y;
cntVer[x]++;
cntDiag[x + y]++;
}
int s;
for (auto [c, cnt]: cntVer) {
if (cnt % 2 == 1) {
s = c;
break;
}
}
int t;
for (auto [c, cnt]: cntDiag) {
if (cnt % 2 == 1) {
t = c - s;
break;
}
}
cout << s << " " << t << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
for (int i = 0; i < t; i++) {
test();
}
return 0;
}
|
2096
|
E
|
Wonderful Teddy Bears
|
You are the proud owner of $n$ teddy bears, which are arranged in a row on a shelf. Each teddy bear is colored either black or pink.
An arrangement of teddy bears is beautiful if all the black teddy bears are to the left of all the pink teddy bears. In other words, there \textbf{does not} exist a pair of indices $(i, j)$ ($1 \leq i < j \leq n$) such that the $i$-th teddy bear is pink, and the $j$-th teddy bear is black.
You want to reorder the teddy bears into a beautiful arrangement. You are too short to reach the shelf, but luckily, you can send instructions to a robot to move the teddy bears around. In a single instruction, the robot can:
- Choose an index $i$ ($1 \le i \le n - 2$) and reorder the teddy bears at positions $i$, $i + 1$ and $i + 2$ so that all the black teddy bears are to the left of all the pink teddy bears.
What is the minimum number of instructions needed to reorder the teddy bears?
|
First, we'll treat all the black teddy bears as $0$ and all the pink teddy bears as $1$. So we are given a binary array of length $n$. In one operation, we can choose three consecutive elements and sort them in ascending order. Now we need to find the minimum number of operations required to sort the array. There are four possible types of operations, depending on the values of the elements: Operation A: $(0, 1, 0) \rightarrow (0, 0, 1)$ Operation B: $(1, 0, 0) \rightarrow (0, 0, 1)$ Operation C: $(1, 0, 1) \rightarrow (0, 1, 1)$ Operation D: $(1, 1, 0) \rightarrow (0, 1, 1)$ Let's think of a greedy solution first. Which operations are better than others? We can evaluate an operation by how much it reduces the number of inversions in the array: Operation A reduces the number of inversions by $1$. Operation B reduces the number of inversions by $2$. Operation C reduces the number of inversions by $1$. Operation D reduces the number of inversions by $2$. Therefore, ideally, we'd only perform operations B and D. Let $x$ be the number of inversions in the original array. Then the answer is at least $\left\lceil \frac{x}{2} \right\rceil$. Suppose we keep performing operations B and D until we can no longer do so. What will the array look like? The array will have the form $[0, 0, \ldots, 0, 0, 1, 0, 1, 0, \ldots 1, 0, 1, 0, 1, 1, \ldots, 1, 1]$. In other words, the array will consist of: A (possibly empty) sequence of consecutive $0$'s; A (possibly empty) sequence of alternating $1$'s and $0$'s; And a (possibly empty) sequence of consecutive $1$'s. Since the array has a sequence of alternating $1$'s and $0$'s, we should think about parity. Let $a$ be the number of $0$'s in the entire array, and $b$ be the number of $0$'s in the even positions. When the array is sorted, all the $0$'s will be at the beginning of the array. So in the end, $b = \left\lfloor \frac{a}{2} \right\rfloor$. Consider how each type of operation changes the value of $b$: Operation A either increases or decreases the value of $b$ by $1$. Operation B does not change the value of $b$. Operation C either increases or decreases the value of $b$ by $1$. Operation D does not change the value of $b$. Therefore, we can only use operations A and C to change the value of $b$. Let $d = \lvert \, \left\lfloor \frac{a}{2} \right\rfloor - b \, \rvert$. Then we need at least $d$ operations of type A or type C. After these $d$ operations, the array will have $(x - d)$ inversions. We can show that $(x - d)$ must be even. Then, we perform $\frac{x - d}{2}$ operations of type B or type D to reduce the number of inversions to $0$. So the answer to the problem is: $d + \frac{x - d}{2} = \frac{x + d}{2}$.
|
[
"greedy",
"implementation",
"sortings"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
void test() {
int n;
cin >> n;
string s;
cin >> s;
long long x = 0;
int a = 0;
int b = 0;
for (int i = n - 1; i >= 0; i--) {
if (s[i] == 'B') {
a++;
if ((i + 1) % 2 == 0) {
b++;
}
}
if (s[i] == 'P') {
x += a;
}
}
int d = abs(a / 2 - b);
cout << (x + d) / 2 << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
for (int i = 0; i < t; i++) {
test();
}
return 0;
}
|
2096
|
F
|
Wonderful Impostors
|
You are a proud live streamer known as Gigi Murin. Today, you will play a game with $n$ viewers numbered $1$ to $n$.
In the game, each player is either a crewmate or an impostor. You don't know the role of each viewer.
There are $m$ statements numbered $1$ to $m$, which are either \textbf{true or false}. For each $i$ from $1$ to $m$, statement $i$ is one of two types:
- $0\:a_i\:b_i$ ($1 \leq a_i \leq b_i \leq n$) — there are no impostors among viewers $a_i, a_i + 1, \ldots, b_i$;
- $1\:a_i\:b_i$ ($1 \leq a_i \leq b_i \leq n$) — there is \textbf{at least} one impostor among viewers $a_i, a_i + 1, \ldots, b_i$.
Answer $q$ questions of the following form:
- $l\:r$ ($1 \leq l \leq r \leq m$) — is it possible that statements $l, l + 1, \ldots, r$ are \textbf{all true}?
Note that it is \textbf{not guaranteed} that there is at least one impostor among all viewers, and it is \textbf{not guaranteed} that there is at least one crewmate among all viewers.
|
Let's call a set of statements satisfiable if it's possible that all of them are true. How do we check if a set of statements is satisfiable? For each statement of the form $0\,a_l\,a_r$, assign all viewers from $a_l$ to $a_r$ as crewmates. Then, assign the rest of the viewers as impostors. Now check if all statements of the form $1\,b_l\,b_r$ are true. When a segment of statements $[s_l, s_r]$ is satisfiable, what can we say about other segments of statements? If $[s_l, s_r]$ is satisfiable, then $[s_l + 1, s_r]$ and $[s_l, s_r - 1]$ are also satisfiable. We'll use the two pointers method. For each $s_r$, let $low(s_r)$ be the smallest $s_l$ such that $[s_l, s_r]$ is satisfiable. To answer a question $s_l\,s_r$, we just check that $s_l \geq low(s_r)$. When we increment $s_r$, we will add statement $(s_r + 1)$. So we must increment $s_l$ and remove statements from the beginning of the segment until $[s_l, s_r + 1]$ is satisfiable. Now we need to efficiently check if we can add a new statement to our current set of statements. Let's rephrase how to determine if a set of statements is satisfiable. For statements of the form $0\,a_l\,a_r$, we'll call $[a_l, a_r]$ a $0$-segment. Similarly, for statements of the form $1\,b_l\,b_r$, we'll call $[b_l, b_r]$ a $1$-segment. When $0$-segments overlap and form a larger segment, we'll call it a $0$-component. Then, a set of statements is satisfiable if there does not exist a $1$-segment that is fully covered by a $0$-component. When we add a $1$-segment $[b_l, b_r]$, we need to check if it will be fully covered by a $0$-component. For each $i$ from $1$ to $n$, let $count(i)$ be the number of $0$-segments that contain viewer $i$. Then, we can add $[b_l, b_r]$ if $\min(count(b_l), count(b_l + 1), \ldots, count(b_r)) = 0$. The values of $count(i)$ can be maintained using a segment tree. When we add a $0$-segment $[a_l, a_r]$, it might merge several $0$-components. So first we need to find the $0$-component $[c_l, c_r]$ that contains $[a_l, a_r]$: $c_l$ is the smallest value such that $\min(count(c_l), count(c_l + 1), \ldots, count(a_l)) > 0$. $c_r$ is the largest value such that $\min(count(a_r), count(a_r + 1), \ldots, count(c_r)) > 0$. Both of these values can be found by performing a walk on the segment tree. Now we need to check if there's a $1$-segment that's fully covered by $[c_l, c_r]$. Sort all the $1$-segments in the input by ascending value of $b_r$. Then, for all segments that satisfy $b_r \leq c_r$, find the one with largest value of $b_l$. If $b_l \geq c_l$, then $[b_l, b_r]$ is fully covered by $[c_l, c_r]$. We can maintain another segment tree with prefix-max queries.
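No reference implementation is attached to this entry, so here is a brute-force illustration (our own sketch, not the intended two-pointers plus segment-tree solution) of the satisfiability check described at the start of the editorial: force crewmates according to the $0$-statements, then verify that every $1$-statement still contains a viewer who may be an impostor. It runs in $O(n \cdot m)$ per check.
#include <bits/stdc++.h>
using namespace std;
// statements: triples (type, a, b); returns true if all of them can hold at once
bool satisfiable(int n, const vector<array<int, 3>>& statements) {
    vector<bool> crew(n + 2, false);               // viewers forced to be crewmates by 0-statements
    for (const auto& s : statements)
        if (s[0] == 0)
            for (int i = s[1]; i <= s[2]; i++) crew[i] = true;
    for (const auto& s : statements)
        if (s[0] == 1) {                           // needs at least one viewer who is not forced
            bool hasRoom = false;
            for (int i = s[1]; i <= s[2]; i++)
                if (!crew[i]) { hasRoom = true; break; }
            if (!hasRoom) return false;
        }
    return true;
}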
|
[
"data structures",
"implementation",
"two pointers"
] | 3,100
| null |
2096
|
G
|
Wonderful Guessing Game
|
\textbf{This is an interactive problem.}
You are a proud teacher at the Millennium Science School. Today, a student named Alice challenges you to a guessing game.
Alice is thinking of an integer from $1$ to $n$, and you must guess it by asking her some queries.
To make things harder, she says you must \textbf{ask all the queries first}, and she will \textbf{ignore} exactly $1$ query.
For each query, you choose an array of $k$ \textbf{distinct} integers from $1$ to $n$, where $k$ is even. Then, Alice will respond with one of the following:
- $L$: the number is one of the first $\frac{k}{2}$ elements of the array;
- $R$: the number is one of the last $\frac{k}{2}$ elements of the array;
- $N$: the number is not in the array;
- $?$: this query is ignored.
Alice is impatient, so you must find a strategy that \textbf{minimizes} the number of queries. Can you do it?
Formally, let $f(n)$ be the minimum number of queries required to determine Alice's number. Then you must find a strategy that uses \textbf{exactly} $f(n)$ queries.
Note that the interactor is \textbf{adaptive}, which means Alice's number is not fixed at the beginning and may depend on your queries. However, it is guaranteed that there exists at least one number that is consistent with Alice's responses.
We can show that $f(n) \leq 20$ for all $n$ such that $2 \le n \le 2 \cdot 10^5$.
|
First, try solving it if Alice doesn't ignore any queries. We can represent the queries using a table (see the second test case in the example): each row is a query, and column $x$ contains Alice's responses if her number were $x$. Let's treat $\texttt{L}$ as $-1$, $\texttt{R}$ as $1$, and $\texttt{N}$ as $0$. Then, our strategy works if each row has a sum of $0$ and all $n$ columns are distinct. Since there are only $3$ possible values, we need at least $q = \left\lceil \log_3(n) \right\rceil$ queries. In fact, this lower bound can be achieved. We generate all $3^{q}$ possible columns $[x_1, x_2, \ldots, x_q]$, where $x_i \in \{-1, 0, 1\}$. Now we need to choose $n$ of them so that the chosen columns sum to the zero vector (equivalently, each row sums to $0$). To do this, we can choose one pair of columns at a time: first, we choose $[x_1, x_2, \ldots, x_q]$; then, we choose $[-x_1, -x_2, \ldots, -x_q]$. Each such pair of columns sums to $0$. If $n$ is odd, we can additionally include $[0, 0, \ldots, 0]$. Now let's try to solve the original problem. For our strategy to work, it must satisfy the following: each row has a sum of $0$; no two columns are the same; no two columns differ in exactly $1$ position. To achieve this, we only need $1$ additional query. Here's another way to think about it: for each column, no matter which element is ignored, we should be able to uniquely determine the missing value. We use the same solution as before, but with $1$ more element: for each column $[x_1, x_2, \ldots, x_q]$, add an element $x_{q + 1}$ such that $(x_1 + x_2 + \ldots + x_{q + 1})\mod 3 = 0$. Because the sum of each column is $0$ modulo $3$, we can uniquely recover any missing element.
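A hedged sketch of the column construction only (the value of $n$ and the way columns are later turned into actual query arrays are illustrative assumptions): take the non-zero columns in $\pm$ pairs so that every row sums to $0$, include the zero column when $n$ is odd, and append a checksum digit that makes every column sum to $0$ modulo $3$.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 10;                                       // example value, assumption for the sketch
    int q = 1;
    long long pw = 3;
    while (pw < n) pw *= 3, q++;                      // q = ceil(log3(n))

    vector<vector<int>> cols;                         // columns over {-1, 0, 1}
    if (n % 2 == 1) cols.push_back(vector<int>(q, 0));// the all-zero column if n is odd
    for (long long code = 0; (int)cols.size() < n; code++) {
        vector<int> c(q);
        long long x = code;
        for (int i = 0; i < q; i++) { c[i] = x % 3 - 1; x /= 3; }  // digits in {-1, 0, 1}
        // take each column together with its negation exactly once
        bool canonical = false;
        for (int d : c) { if (d == 1) { canonical = true; break; } if (d == -1) break; }
        if (!canonical) continue;                     // skip the zero column and non-canonical representatives
        vector<int> neg(q);
        for (int i = 0; i < q; i++) neg[i] = -c[i];
        cols.push_back(c);
        if ((int)cols.size() < n) cols.push_back(neg);
    }
    // append the checksum digit so that each column's sum is 0 modulo 3
    for (auto& c : cols) {
        int s = accumulate(c.begin(), c.end(), 0);
        int add = ((-s) % 3 + 3) % 3;                 // residue 0, 1 or 2
        c.push_back(add == 2 ? -1 : add);             // map residue 2 to -1, keeping digits in {-1, 0, 1}
    }
    for (auto& c : cols) {                            // print the columns for inspection
        for (int d : c) cout << d << ' ';
        cout << '\n';
    }
}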
|
[
"bitmasks",
"constructive algorithms",
"interactive"
] | 3,200
| null |
2096
|
H
|
Wonderful XOR Problem
|
You are the proud... never mind, just solve this problem.
There are $n$ intervals $[l_1, r_1], [l_2, r_2], \ldots [l_n, r_n]$. For each $x$ from $0$ to $2^m - 1$, find the number, modulo $998\,244\,353$, of sequences $a_1, a_2, \ldots a_n$ such that:
- $l_i \leq a_i \leq r_i$ for all $i$ from $1$ to $n$;
- $a_1 \oplus a_2 \oplus \ldots \oplus a_n = x$, where $\oplus$ denotes the bitwise XOR operator.
|
Is this FFT? Let's consider the XOR convolution. Given two polynomials $A(x)$ and $B(x)$ with degree at most $(2^{m} - 1)$, the XOR convolution $C(x) = A(x) \star B(x)$ is defined as follows: $\displaystyle [x^k]C(x) = \sum_{0 \leq i < 2^m \\ 0 \leq j < 2^m \\ i \oplus j = k} [x^i]A(x) \cdot [x^j]B(x)$ To efficiently compute the XOR convolution, we can use FWHT. First, let $s(k, i) = (-1)^{\text{popcount}(k\,\&\,i)}$. Then, $F(A(x))$ is defined as follows: $\displaystyle [x^k]F(A(x)) = \sum_{0 \leq i < 2^m} [x^i]A(x) \cdot s(k, i)$ Given $A^{\,\prime}(x) = F(A(x))$, we can uniquely determine $A(x) = F^{-1}(A^{\,\prime}(x))$ using the inverse of FWHT. To compute $F(A(x))$ and $F^{-1}(A^{\,\prime}(x))$, we can use SOS DP. Define the dot product $D(x) = A(x) \cdot B(x)$ as follows: $\displaystyle [x^k]D(x) = [x^k]A(x) \cdot [x^k]B(x)$ We can show that $F(A(x) \star B(x)) = F(A(x)) \cdot F(B(x))$ for any two polynomials $A(x)$ and $B(x)$. Let's solve the problem. For each interval $[l_i, r_i]$, let $A_i(x) = x^{l_i} + x^{l_i + 1} + \ldots + x^{r_i}$. We want to find $A_1(x) \star A_2(x) \star \ldots \star A_n(x)$. To do this, we'll compute $F^{-1}(F(A_1(x)) \cdot F(A_2(x)) \cdot \ldots \cdot F(A_n(x)))$. We can note that $\displaystyle [x^k]F(A_i(x)) = \sum_{l_i \leq j \leq r_i} s(k, j)$ Let $f(k, r) = \displaystyle \sum_{0 \leq j \leq r} s(k, j)$. Then $\displaystyle [x^k]F(A_i(x)) = f(k, r_i) - f(k, l_i - 1)$. Now we need to find a way to compute $f(k, r)$ efficiently. Let $p$ be the ($0$-indexed) position of the least significant bit of $k$. Let $c = \left\lfloor \frac{r}{2^{p + 1}} \right\rfloor$. Then $f(k, r) = \displaystyle f(k, c \cdot 2^{p+1} - 1) + \sum_{c \cdot 2^{p + 1} \leq j \leq r} s(k, j)$ Due to cancellation of terms, we have $f(k, c \cdot 2^{p+1} - 1) = 0$. We can also see that $s(k, j) = s(2^p, j) \cdot s\left(\left\lfloor \frac{k}{2^{p + 1}} \right\rfloor, \left\lfloor \frac{j}{2^{p + 1}} \right\rfloor\right) = s(2^p, j) \cdot s\left(\left\lfloor \frac{k}{2^{p + 1}} \right\rfloor, c\right)$ Therefore, $\displaystyle f(k, r) = \left(\sum_{c \cdot 2^{p + 1} \leq j \leq r} s(2^p, j)\right) \cdot s\left(\left\lfloor \frac{k}{2^{p + 1}} \right\rfloor, c\right)$. Returning to $F(A_i(x))$, we get: $\displaystyle [x^k]F(A_i(x)) = a_i \cdot s(k^{\,\prime}, c_i) + b_i \cdot s(k^{\,\prime}, d_i)$ where: $a_i$ and $b_i$ are constants independent of $k$. $k^{\,\prime} = \left\lfloor \frac{k}{2^{p + 1}} \right\rfloor$, $c_i = \left\lfloor \frac{r_i}{2^{p + 1}} \right\rfloor$, and $d_i = \left\lfloor \frac{l_i - 1}{2^{p + 1}} \right\rfloor$. But wait! This is just $[x^{k^{\,\prime}}]F(a_i \cdot x^{c_i} + b_i \cdot x^{d_i})$. So we'll let $B_i(x) = a_i \cdot x^{c_i} + b_i \cdot x^{d_i}$. Therefore, $[x^k](F(A_1(x)) \cdot F(A_2(x)) \cdot \ldots \cdot F(A_n(x))) = [x^{k^{\,\prime}}]F(B_1(x) \star B_2(x) \star \ldots \star B_n(x))$ Now we need to find a way to compute $F(B_1(x) \star B_2(x) \star \ldots \star B_n(x))$ efficiently. First, we can normalize each polynomial: $a \cdot x^c + b \cdot x^d = (a + b \cdot x^{c\,\oplus\,d}) \star x^c$ Now all polynomials have the form $a_i + b_i \cdot x^{c_i}$. 
Next, we can convolve polynomials with matching powers of $x$: $(a_1 + b_1 \cdot x^c) \star (a_2 + b_2 \cdot x^c) = (a_1 a_2 + b_1 b_2) + (a_1 b_2 + b_1 a_2) \cdot x^c$ Now the expression is of the form: $F((a_0 + b_0 \cdot x^0) \star (a_1 + b_1 \cdot x^1) \star \ldots \star (a_{2^m - 1} + b_{2^m - 1} \cdot x^{2^m - 1})) = F(B_0) \cdot F(B_1) \cdot \ldots \cdot F(B_{2^m - 1})$ Finally, we have: $\displaystyle [x^k] (F(B_0) \cdot F(B_1) \cdot \ldots \cdot F(B_{2^m - 1})) = \prod_{i = 0}^{2^m - 1} (a_i + b_i \cdot s(k, i))$ This can be computed using SOS DP.
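For reference, here is the textbook in-place Walsh–Hadamard transform for XOR convolution modulo $998\,244\,353$; this is the generic $F$ / $F^{-1}$ from above, not the authors' optimized solution.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 998244353;

ll power(ll a, ll e) { ll r = 1; a %= MOD; while (e) { if (e & 1) r = r * a % MOD; a = a * a % MOD; e >>= 1; } return r; }

// invert = false computes F(a), invert = true computes F^{-1}(a); a.size() must be a power of two.
void fwht_xor(vector<ll>& a, bool invert) {
    int n = a.size();
    for (int len = 1; len < n; len <<= 1)
        for (int i = 0; i < n; i += len << 1)
            for (int j = i; j < i + len; j++) {
                ll u = a[j], v = a[j + len];
                a[j] = (u + v) % MOD;
                a[j + len] = (u - v + MOD) % MOD;
            }
    if (invert) {
        ll inv_n = power(n, MOD - 2);          // dividing by n undoes the forward transform
        for (ll& x : a) x = x * inv_n % MOD;
    }
}

int main() {
    vector<ll> A = {1, 2, 3, 4}, B = {5, 6, 7, 8};
    fwht_xor(A, false); fwht_xor(B, false);
    for (int i = 0; i < 4; i++) A[i] = A[i] * B[i] % MOD;   // the pointwise (dot) product from above
    fwht_xor(A, true);                                      // A now holds the XOR convolution of A and B
    for (ll x : A) cout << x << ' ';
    cout << '\n';
}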
|
[
"bitmasks",
"combinatorics",
"dp",
"fft",
"math"
] | 3,200
| null |
2097
|
A
|
Sports Betting
|
The boarding process for various flights can occur in different ways: either by \textbf{bus} or through a \textbf{telescopic jet bridge}. Every day, exactly one flight is made from St. Petersburg to Minsk, and Vadim decided to demonstrate to the students that he always knows in advance how the boarding will take place.
Vadim made a bet with $n$ students, and with the $i$-th student, he made a bet on day $a_i$. Vadim wins the bet if he correctly predicts the boarding process on both day $a_i+1$ and day $a_i+2$.
Although Vadim does not know in advance how the boarding will occur, he really wants to win the bet with \textbf{at least} one student and convince him of his predictive abilities. Check if there exists a strategy for Vadim that allows him to \textbf{guarantee} success.
|
Let's sort the array $a$ in non-decreasing order: $a_1 \le a_2 \le \ldots \le a_n$. Suppose Vadim argues with the students in turn. We can formalize the problem as follows: since there are only two possible boarding methods each day, we can encode them with the digits $0$ and $1$. Each of Vadim's predictions can be represented by a pair of numbers $(c_i, d_i)$: the boarding methods for days $a_i+1$ and $a_i+2$, respectively. Vadim will be able to win in the following cases: If there exists a quadruple of students with $a_i = a_{i+1} = a_{i+2} = a_{i+3}$, that is, four students with whom he bets on the same day. In this case, he can provide all $4$ possible predictions for days $a_i+1$ and $a_i+2$ and will guarantee at least one correct prediction. If there exists a pair of indices $i < j$ such that $a_i = a_{i+1} < a_j = a_{j+1}$ and for each $x \in \{ a_i+1, a_i+2, \ldots, a_j-1 \}$ there exists a $k$ such that $a_k = x$. Suppose on day $x$ and day $y$ ($x < y$) Vadim argues with at least two students, and on each of the days $x+1,\ldots,y-1$ with at least one student. On day $x$, Vadim makes predictions $(0,1)$ and $(1,1)$. On each of the days $x+1,\ldots,y-1$, Vadim makes the prediction $(0,1)$. On day $y$, Vadim makes predictions $(0,0)$ and $(0,1)$. To prove the correctness of Vadim's strategy, consider the string $s_{x+1} \ldots s_{y+1} s_{y+2}$: the boarding methods from day $x+1$ to day $y+2$. If $s_{x+2} \neq 0$ or $s_{y+1} \neq 1$, then Vadim definitely convinces at least one student on day $x$ or day $y$. Otherwise, if $s_{x+2}=0$ and $s_{y+1}=1$, then there exists a day $z$ ($x \le z < y$) such that $s_{z+1}=0$ and $s_{z+2}=1$, which means Vadim convinces a student on day $z$. It can be shown that in all other cases, Vadim loses. To demonstrate this, we need to fix an arbitrary strategy of Vadim's and try to find a counterexample to it. The general plan for the proof is as follows: If $a_i + 2 \le a_{i+1}$, then the segments of students $[1, i]$ and $[i+1, n]$ can be considered independently. Let's consider a block of students with whom Vadim argues without breaks. If $a_1 < a_2$ or $a_{n-1} < a_n$, then the first or, respectively, the last student can be discarded without affecting the other students. Otherwise, we have $a_1 = a_2 \le \ldots \le a_{n-1} = a_n$, and by the previous point this leaves Vadim without a winning strategy only when $n \le 3$.
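A small sketch of this check (my own reading of the two winning cases above, so treat it as an assumption rather than the official solution): group the bets by day and scan the days in increasing order.

#include <bits/stdc++.h>
using namespace std;

// Returns true if Vadim can guarantee winning at least one bet.
bool canGuaranteeWin(const vector<long long>& a) {
    map<long long, int> cnt;                    // number of bets made on each day
    for (long long d : a) cnt[d]++;
    long long lastDouble = -1, prevDay = LLONG_MIN;
    for (auto [day, c] : cnt) {
        if (prevDay != LLONG_MIN && day != prevDay + 1) lastDouble = -1;  // a gap breaks the chain of covered days
        if (c >= 4) return true;                // four predictions cover all 2x2 outcomes
        if (c >= 2) {
            if (lastDouble != -1) return true;  // two "double" days joined by consecutive bet days
            lastDouble = day;
        }
        prevDay = day;
    }
    return false;
}

int main() {
    cout << (canGuaranteeWin({3, 3, 4, 5, 5}) ? "Yes" : "No") << '\n';  // Yes
    cout << (canGuaranteeWin({1, 2, 3}) ? "Yes" : "No") << '\n';        // No
}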
|
[
"2-sat",
"brute force",
"math",
"sortings"
] | 1,400
| null |
2097
|
B
|
Baggage Claim
|
Every airport has a baggage claim area, and Balbesovo Airport is no exception. At some point, one of the administrators at Sheremetyevo came up with an unusual idea: to change the traditional shape of the baggage claim conveyor from a carousel to a more complex form.
Suppose that the baggage claim area is represented as a rectangular grid of size $n \times m$. The administration proposed that the path of the conveyor should pass through the cells $p_1, p_2, \ldots, p_{2k+1}$, where $p_i = (x_i, y_i)$.
For each cell $p_i$ and the next cell $p_{i+1}$ (where $1 \leq i \leq 2k$), these cells must share a common side. Additionally, the path must be simple, meaning that for no pair of indices $i \neq j$ should the cells $p_i$ and $p_j$ coincide.
Unfortunately, the route plan was accidentally spoiled by spilled coffee, and only the cells with odd indices of the path were preserved: $p_1, p_3, p_5, \ldots, p_{2k+1}$. Your task is to determine the number of ways to restore the original complete path $p_1, p_2, \ldots, p_{2k+1}$ given these $k+1$ cells.
Since the answer can be very large, output it modulo $10^9+7$.
|
If any pair of adjacent cells $p_{2i-1}$ and $p_{2i+1}$ is at a distance other than $2$, then the answer is $0$. Otherwise, for each such pair of cells, there are two possible cases: Cells $p_{2i-1}$ and $p_{2i+1}$ are in the same row or column. Then cell $p_{2i}$ must be located between them. In the other case, there are two possible positions for cell $p_{2i}$. We will construct a graph where the vertices are all the cells of the field, and edges are created for each even cell $p_2, p_4, \ldots, p_{2k}$. If cell $p_{2i}$ can only be in position $\alpha$, we draw a self-loop at vertex $\alpha$. If cell $p_{2i}$ can be in position $\alpha$ or $\beta$, we draw an edge connecting these two vertices. Thus, we need to choose an incident vertex for each edge such that each vertex is chosen at most once. It is clear that this problem can be solved independently for each connected component, and then we multiply the answers for all components: If a component with $s$ vertices contains more than $s$ edges, then the answer is $0$ (the pigeonhole principle). If a component with $s$ vertices has exactly $s$ edges, then the component contains exactly one cycle. If this cycle is a loop, then the answer is $1$; if it is a non-degenerate cycle, then the answer is $2$. If the component with $s$ vertices is a tree, then the answer is $s$ (we choose which vertex is left unchosen, and then everything is determined).
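A sketch of the counting step under the assumptions above (input parsing and candidate-position generation are omitted, and the helper names are this sketch's own): each even path cell contributes one edge, possibly a self-loop, and a DSU tracks per-component vertex count, edge count and whether a self-loop is present.

#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7;

// DSU that tracks, for every component, its vertex count, edge count and whether it has a self-loop.
struct DSU {
    vector<int> p; vector<long long> verts, edges; vector<bool> loop;
    DSU(int n) : p(n), verts(n, 1), edges(n, 0), loop(n, false) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void addEdge(int a, int b) {                        // a == b encodes a self-loop
        int ra = find(a), rb = find(b);
        if (ra == rb) { edges[ra]++; if (a == b) loop[ra] = true; return; }
        p[ra] = rb; verts[rb] += verts[ra]; edges[rb] += edges[ra] + 1;
        loop[rb] = loop[rb] || loop[ra];
    }
};

// edges: one entry per even path cell, joining its one or two candidate cell ids (a == b for one candidate).
long long countWays(int cells, const vector<pair<int,int>>& edges) {
    DSU d(cells);
    for (auto [a, b] : edges) d.addEdge(a, b);
    long long ans = 1;
    vector<bool> seen(cells, false);
    for (int v = 0; v < cells; v++) {
        int r = d.find(v);
        if (seen[r]) continue;
        seen[r] = true;
        if (d.edges[r] > d.verts[r]) return 0;          // pigeonhole: more edges than vertices
        else if (d.edges[r] == d.verts[r]) ans = ans * (d.loop[r] ? 1 : 2) % MOD;   // unique cycle
        else ans = ans * (d.verts[r] % MOD) % MOD;      // tree component: s choices
    }
    return ans;
}

int main() {
    // Toy example: 3 candidate cells in a row; two even path cells, each choosing between two of them.
    cout << countWays(3, {{0, 1}, {1, 2}}) << '\n';     // a tree on 3 vertices -> 3 ways
}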
|
[
"combinatorics",
"dfs and similar",
"dp",
"dsu",
"graphs",
"implementation",
"math",
"trees"
] | 2,300
| null |
2097
|
C
|
Bermuda Triangle
|
The Bermuda Triangle — a mysterious area in the Atlantic Ocean where, according to rumors, ships and airplanes disappear without a trace. Some blame magnetic anomalies, others — portals to other worlds, but the truth remains hidden in a fog of mysteries.
A regular passenger flight 814 was traveling from Miami to Nassau on a clear sunny day. Nothing foreshadowed trouble until the plane entered a zone of strange flickering fog. Radio communication was interrupted, the instruments spun wildly, and flashes of unearthly light flickered outside the windows.
For simplicity, we will assume that the Bermuda Triangle and the airplane are on a plane, and the vertices of the triangle have coordinates $(0, 0)$, $(0, n)$, and $(n, 0)$. Initially, the airplane is located at the point $(x, y)$ \textbf{strictly inside} the Bermuda Triangle and is moving with a velocity vector $(v_x, v_y)$. All instruments have failed, so the crew cannot control the airplane.
The airplane can escape from the triangle if it ever reaches exactly one of the vertices of the triangle. However, if at any moment (possibly non-integer) the airplane hits the boundary of the triangle (but not at a vertex), its velocity vector is immediately reflected relative to that side$^\dagger$, and the airplane continues to move in the new direction.
Determine whether the airplane can ever escape from the Bermuda Triangle (i.e., reach exactly one of its vertices). If this is possible, also calculate how many times before that moment the airplane will hit the boundary of the triangle (each touch of the boundary, even at the same point, counts; crossing a vertex does not count).
$^\dagger$ Reflection occurs according to the usual laws of physics. The angle of incidence equals the angle of reflection.
|
We will use the idea of reflections. Instead of saying that the airplane is reflected with respect to a side, we will say that it actually flew further but ended up in the triangle that results from reflecting the original triangle with respect to that side. All such triangles form a regular pattern covering the plane, and the vertices of the triangles are located at all points of the form $(n \cdot i, n \cdot j)$ ($i, j \in \mathbb{Z}$). Accordingly, it is easy to see that we can reduce the velocity vector so that $\textrm{gcd}(v_x, v_y) = 1$. Then, to determine the time it will take for the airplane to exit the triangle, we use the Chinese remainder theorem: $v_x t + x \equiv 0 \pmod{n}$ $v_y t + y \equiv 0 \pmod{n}$ Accordingly, we can compute the minimal non-negative suitable $t$. Next, we need to calculate the number of reflections. Essentially, this is the number of times the segment from $(x, y)$ to $(v_x t + x, v_y t + y)$ intersects the lines of this pattern. If the endpoint is $(t_x \cdot n, t_y \cdot n)$, then the counts are as follows: the number of intersections with vertical lines is $t_x - 1$, with horizontal lines $t_y - 1$, with lines parallel to the hypotenuse of the original triangle $\lfloor \frac{t_x+t_y}{2} \rfloor$, and with lines perpendicular to the original hypotenuse $\lfloor \frac{|t_x-t_y|}{2} \rfloor$.
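A sketch of the CRT step only (the gcd reduction of the velocity and the reflection counting are omitted; the sample values in main are illustrative assumptions).

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

ll egcd(ll a, ll b, ll& x, ll& y) {               // returns gcd(a, b) and Bezout coefficients
    if (b == 0) { x = 1; y = 0; return a; }
    ll x1, y1, g = egcd(b, a % b, x1, y1);
    x = y1; y = x1 - (a / b) * y1;
    return g;
}

// solve a*t = b (mod m); returns {t0, mod} describing all solutions t = t0 (mod mod), or {-1, -1}
pair<ll, ll> solveLinear(ll a, ll b, ll m) {
    a = ((a % m) + m) % m; b = ((b % m) + m) % m;
    ll x, y, g = egcd(a, m, x, y);
    if (b % g != 0) return {-1, -1};
    ll m2 = m / g;
    ll t0 = ( (__int128)(b / g) * ((x % m2 + m2) % m2) ) % m2;
    return {t0, m2};
}

// combine t = r1 (mod m1) and t = r2 (mod m2)
pair<ll, ll> crt(pair<ll, ll> A, pair<ll, ll> B) {
    auto [r1, m1] = A; auto [r2, m2] = B;
    if (m1 < 0 || m2 < 0) return {-1, -1};
    ll x, y, g = egcd(m1, m2, x, y);
    if ((r2 - r1) % g != 0) return {-1, -1};
    ll lcm = m1 / g * m2;
    ll diff = (r2 - r1) / g % (m2 / g);
    ll t = ( (__int128)diff * x % (m2 / g) * m1 + r1 ) % lcm;
    return {((t % lcm) + lcm) % lcm, lcm};
}

int main() {
    ll n = 6, xPos = 1, yPos = 2, vx = 5, vy = 4;                  // example values (assumption)
    auto res = crt(solveLinear(vx, -xPos, n), solveLinear(vy, -yPos, n));
    if (res.second < 0) cout << -1 << '\n';                        // no valid t: the airplane never escapes
    else cout << "t = " << res.first << '\n';                      // here: t = 1, reaching the vertex image (n, n)
}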
|
[
"chinese remainder theorem",
"geometry",
"implementation",
"math",
"number theory"
] | 2,400
| null |
2097
|
D
|
Homework
|
Some teachers work at the educational center "Sirius" while simultaneously studying at the university. In this case, the trip does not exempt them from completing their homework, so they do their homework right on the plane. Artem is one of those teachers, and he was assigned the following homework at the university.
With an arbitrary string $a$ of \textbf{even} length $m$, he can perform the following operation. Artem splits the string $a$ into two halves $x$ and $y$ of equal length, after which he performs \textbf{exactly one} of three actions:
- For each $i \in \left\{ 1, 2, \ldots, \frac{m}{2}\right\}$ assign $x_i = (x_i + y_i) \bmod 2$;
- For each $i \in \left\{ 1, 2, \ldots, \frac{m}{2}\right\}$ assign $y_i = (x_i + y_i) \bmod 2$;
- Perform an arbitrary number of operations (the same operations defined above, applied recursively) on the strings $x$ and $y$, independently of each other. Note that in this case, the strings $x$ and $y$ must be of even length.
After that, the string $a$ is replaced by the strings $x$ and $y$, concatenated in the same order. Unfortunately, Artem fell asleep on the plane, so you will have to complete his homework. Artem has two binary strings $s$ and $t$ of length $n$, each consisting of $n$ characters 0 or 1. Determine whether it is possible to make string $s$ equal to string $t$ with \textbf{an arbitrary} number of operations.
|
The problem itself is trivial, but solving it requires not skipping lectures on linear algebra. Suspecting that not all participants are familiar with the basics of linear algebra, we will try to explain each step in detail. Part 1. Matrices. Let $n = k \cdot m$, where $m$ is an odd number, and $k$ is a power of two. Any binary string $s$ of length $n$ can be represented as a matrix $M(s)$: $M(s) = \begin{pmatrix} s_1 & s_2 & \cdots & s_m \\ s_{m+1} & s_{m+2} & \cdots & s_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ s_{n-m+1} & s_{n-m+2} & \cdots & s_{n} \end{pmatrix}$ It is easy to prove that any operation $\mu$ (one move, several moves, or none) that transforms string $s$ into string $t$ can be expressed as $\hat{\mu} \cdot M(s) = M(t)$, where $\hat{\mu}$ is a matrix in $\mathbb{Z}_2^{k \times k}$. Indeed: If $k=1$, the only possible operation $\mu$ does nothing, and $\hat{\mu}$ is the identity matrix. If the operation $\mu$ adds the second half of $s$ to the first half, then $\hat{\mu}$ can be written in block form: $\hat{\mu} = \begin{pmatrix} I & I \\ 0 & I \end{pmatrix}$ If the operation $\mu$ adds the first half to the second half, then $\hat{\mu}$ can also be written in block form: $\hat{\mu} = \begin{pmatrix} I & 0 \\ I & I \end{pmatrix}$ If $\mu$ independently changes the left and right halves of $s$ using operations $\mu_1$ and $\mu_2$, then $\hat{\mu}$ has the form: $\hat{\mu} = \begin{pmatrix} \hat{\mu}_1 & 0 \\ 0 & \hat{\mu}_2 \end{pmatrix}$ If $\mu$ is a combination of operations $\mu_1, \ldots, \mu_l$, then $\hat{\mu} = \hat{\mu_l} \ldots \hat{\mu_1}$. Part 2. Torment and Suffering. Proposition. For any invertible matrix $A \in \textrm{GL}(\mathbb{Z}_2^k)$, there exists an operation $\mu$ such that $\hat{\mu} = A$. Proof. The case $k=1$ is trivial; for $k=2$, we can explicitly provide expressions for elementary transformations. Adding one row to another is trivial, and swapping them can be done as follows: $\begin{pmatrix}x\\y\end{pmatrix} \to \begin{pmatrix}x+y\\y\end{pmatrix} \to \begin{pmatrix}x+y\\x+2y\end{pmatrix} = \begin{pmatrix}x+y\\x\end{pmatrix} \to \begin{pmatrix}y\\x\end{pmatrix}$ Comment. Yes, this is the implementation of std::swap using the XOR operation. Now consider the case $k > 2$. By the induction hypothesis, we can apply any invertible transformation to the halves of string $s$ at any moment. Let $x_1, \ldots, x_t$ be the first $t = \frac{k}{2}$ rows of the matrix $M(s)$, and $y_1, \ldots, y_t$ be the second half of the rows.
We will show that we can add row $x_1$ to row $y_1$ using the specified operations: $\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_t \\ y_1 \\ y_2 \\ \vdots \\ y_t \end{pmatrix} \to \begin{pmatrix} x_1 + x_2 \\ x_1 \\ \vdots \\ x_t \\ y_1 \\ y_2 \\ \vdots \\ y_t \end{pmatrix} \to \begin{pmatrix} x_1 + x_2 \\ x_1 \\ \vdots \\ x_t \\ y_1 + x_1 + x_2 \\ y_2 + x_1 \\ \vdots \\ y_t + x_t \end{pmatrix} \to \begin{pmatrix} x_1 + x_2 \\ x_1 \\ \vdots \\ x_t \\ y_1 + y_2 + x_2 \\ y_2 + x_1 \\ \vdots \\ y_t + x_t \end{pmatrix} \to \begin{pmatrix} x_1 + x_2 \\ x_1 \\ \vdots \\ x_t \\ y_1 + y_2 + x_1 \\ y_2 \\ \vdots \\ y_t \end{pmatrix} \to \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_t \\ y_1 + x_1 \\ y_2 \\ \vdots \\ y_t \end{pmatrix}$ Then, to add any row from the first half to any row of the second half, we should shuffle the rows in the first and second halves so that the required rows are at the top, perform the specified sequence of actions, and then restore the original order of the rows. Similarly, a row from the second half can be added to a row of the first half. Thus, any row can be added to any other. And since swapping two rows is realized through this operation, and considering that matrices live in $\mathbb{Z}_2$, this means that any elementary operation can be represented in terms of the operations described. Since any invertible matrix $A$ can be represented as a composition of elementary operations, it can be represented as a composition of some operations $\mu$. To summarize, this proof turned out to be a bit more unpleasant than we expected. A good approach here is to grasp the idea of the proof rather than getting bogged down in every technical transition. Part 3. A Cute Algorithm. To check whether string $s$ can be transformed into string $t$, based on the above proof, it is sufficient to check whether there exists an invertible matrix $A$ such that $A \cdot M(s) = M(t)$. We apply the Gaussian elimination algorithm to $M(s)$ and $M(t)$, which uses row operations to bring $M(s)$ and $M(t)$ to reduced row echelon form $G(s)$ and $G(t)$, respectively. Then the answer to the problem is "Yes" if and only if $G(s) = G(t)$, which follows from the uniqueness of the reduced row echelon form of a matrix. The Gaussian elimination algorithm for a matrix $M$ of size $m \times k$ runs in time: $O(m k \cdot \textrm{rnk} \, M) = O(m k \cdot \min \{m, k\}) = O(n \sqrt n)$ Since we are working with bit matrices, using std::bitset, the algorithm can be improved to run in time $O\left(\frac{n \sqrt n}{w}\right)$; however, the solution without bitsets also runs in an acceptable time despite the large constraints in the problem.
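A sketch of the final check under the stated reduction (the bitset width and helper names are illustrative assumptions; the real solution must also split $n$ into $k \cdot m$ with $k$ a power of two and $m$ odd): build the $k \times m$ bit matrices of $s$ and $t$, reduce both to reduced row echelon form over GF(2), and compare.

#include <bits/stdc++.h>
using namespace std;
const int MAXM = 4096;                       // illustrative upper bound on the row length m

// Gaussian elimination over GF(2); returns the reduced row echelon form.
vector<bitset<MAXM>> rref(vector<bitset<MAXM>> rows, int m) {
    int k = rows.size(), r = 0;
    for (int col = 0; col < m && r < k; col++) {
        int piv = -1;
        for (int i = r; i < k; i++) if (rows[i][col]) { piv = i; break; }
        if (piv == -1) continue;
        swap(rows[r], rows[piv]);
        for (int i = 0; i < k; i++)           // eliminate this column from every other row
            if (i != r && rows[i][col]) rows[i] ^= rows[r];
        r++;
    }
    return rows;
}

bool sameRref(const string& s, const string& t, int k, int m) {
    auto build = [&](const string& str) {
        vector<bitset<MAXM>> rows(k);
        for (int i = 0; i < k; i++)
            for (int j = 0; j < m; j++)
                rows[i][j] = str[i * m + j] == '1';
        return rref(rows, m);
    };
    return build(s) == build(t);
}

int main() {
    // n = 4 = k * m with k = 4 (a power of two), m = 1 (odd)
    cout << (sameRref("0110", "1001", 4, 1) ? "Yes" : "No") << '\n';
}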
|
[
"bitmasks",
"math",
"matrices"
] | 2,800
| null |
2097
|
E
|
Clearing the Snowdrift
|
Boy Vasya loves to travel very much. In particular, flying in airplanes brings him extraordinary pleasure. He was about to fly to another city, but the runway was heavily covered with snow and needed to be cleared.
The runway can be represented as $n$ consecutive sections numbered from $1$ to $n$. The snowstorm was quite strong, but it has already stopped, so Vasya managed to calculate that the $i$-th section is covered with $a_i$ meters of snow. For such situations, the airport has a snowplow that works in a rather unusual way. In one minute, the snowplow can do the following:
- Choose a consecutive segment of length no more than $d$ and remove one meter of snow from the most snow-covered sections. Formally, one can choose $1 \le l \le r \le n$ ($r - l + 1 \le d$). After that, $c = \max \{ a_l, a_{l + 1}, \ldots , a_r \}$ is calculated, and if $c > 0$, then for all $i \colon l \le i \le r$ such that $a_i = c$, the value of $a_i$ is decreased by one.
Vasya has been preparing for the flight for a long time and wants to understand how much time he has left to wait until all sections are completely cleared of snow. In other words, it is required to calculate the minimum number of minutes that the snowplow will need to achieve $a_i = 0$ for all $i$ from $1$ to $n$.
|
We will assign to each operation the maximum $a_i$ on the segment of that operation. Consider some order of operations that zeroes out the entire array. If there are two adjacent operations in which the maximum on the segment of the earlier operation is smaller, then we can swap these two operations without changing anything (we leave this simple case analysis as an exercise for the reader). Thus, from any sequence, we can obtain a sequence of operations of the same length where the assigned maximums on the segments of the operations do not increase during the process. Now let $C$ denote the maximum element in the array. Then one of the optimal sequences of operations will first decrease by one all numbers equal to $C$, then all numbers equal to $C - 1$, and so on. How can we calculate the minimum number of operations needed to decrease all numbers equal to the current maximum in the array by one? We will look at the positions of these maximums (let's denote this set of positions as $S$); they need to be covered by the minimum number of segments of length no more than $d$. This is a standard problem that can be solved using a greedy algorithm: take the leftmost uncovered point $x \in S$ and place a segment $[x, \min(x + d - 1, n)]$; remove all points from $S \cap [x, \min(x + d - 1, n)]$; repeat until $S$ is empty. Now we can solve the problem in $O(n \cdot C)$, but that is slow. Let's denote the sorted set of numbers in the array as $x_1 < x_2 < \ldots < x_k$. We will also introduce $x_0 = 0$. While we are decreasing the maximums from $x_i$ to $x_{i-1}$, the set $S$ does not change, meaning the greedy process is the same and always selects the same number of segments. Thus, it is sufficient to run the greedy process once for each distinct value. We obtain a solution in $O(n^2)$. Now let's optimize this solution. We only need to find the number of segments in the greedy process for the sets $S_j = \{i : a_i \geq x_j\}, 0 \leq j \leq k$. Let's simulate this process simultaneously from left to right. Consider $a_1$. Let $a_1 = x_i$. Then for all sets numbered from $0$ to $i$, a segment of length $d$ starts at this element. Let's remember this and say that these sets will return to consideration when we reach the processing of $a_{d+1}$, because up to that point the elements are covered by the segment. We will continue doing this. We take all sets with numbers less than or equal to $j$ (where the current element of the array is equal to $x_j$), for which the segments have already ended, update the answer, and say that these sets will finish their segment after $d$ processed elements in the scanline. These are operations on sets that can be maintained in a segment tree which essentially stores the indices of the sets it contains, meaning only the paths to the leaves corresponding to the numbers of these sets are stored (somewhat similar to an implicit segment tree). The asymptotic complexity of the solution is $O(n \cdot \log(n))$. Similar operations can be maintained in a Cartesian tree, but that would lead to a larger constant and $O(n \cdot \log^2(n))$, which should not pass the time limits. For more details, you can read the author's solution. There is also a solution using link-cut trees, which is simpler to understand but requires knowledge of this structure, so it was not intended as the author's solution.
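A sketch of the simpler quadratic solution described above (not the authors' segment-tree solution): for every distinct value $x$, count the segments the greedy covering needs for $S = \{ i : a_i \ge x \}$ and weight that count by the height drop to the previous distinct value.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

ll minMinutes(const vector<ll>& a, ll d) {
    int n = a.size();
    vector<ll> vals(a.begin(), a.end());
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    if (!vals.empty() && vals[0] == 0) vals.erase(vals.begin());   // height 0 needs no work

    ll ans = 0, prev = 0;
    for (ll x : vals) {                      // process distinct heights in increasing order
        ll segments = 0;
        ll coveredUpTo = -1;                 // rightmost index covered by the last placed segment
        for (int i = 0; i < n; i++) {
            if (a[i] < x || i <= coveredUpTo) continue;
            segments++;                      // greedy: start a new segment of length d at position i
            coveredUpTo = i + d - 1;
        }
        ans += segments * (x - prev);        // this many operations for each of the (x - prev) levels
        prev = x;
    }
    return ans;
}

int main() {
    cout << minMinutes({2, 1, 2}, 2) << '\n';   // small example: 4 minutes
}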
|
[
"data structures",
"dfs and similar",
"dp",
"greedy"
] | 3,100
| null |
2097
|
F
|
Lost Luggage
|
As is known, the airline "Trouble" often loses luggage, and concerned journalists decided to calculate the maximum number of luggage pieces that may not return to travelers.
The airline "Trouble" operates flights between $n$ airports, numbered from $1$ to $n$. The journalists' experiment will last for $m$ days. It is known that at midnight before the first day of the experiment, there were $s_j$ lost pieces of luggage in the $j$-th airport. On the $i$-th day, the following occurs:
- \textbf{In the morning}, $2n$ flights take off simultaneously, including $n$ flights of the first type and $n$ flights of the second type.
- The $j$-th flight of the first type flies from airport $j$ to airport $(((j-2) \bmod n )+ 1)$ (the previous airport, with the first airport being the last), and it can carry no more than $a_{i,j}$ lost pieces of luggage.
- The $j$-th flight of the second type flies from airport $j$ to airport $((j \bmod n) + 1)$ (the next airport, with the last airport being the first), and it can carry no more than $c_{i,j}$ lost pieces of luggage.
- \textbf{In the afternoon}, a check of lost luggage is conducted at the airports. If after the flights have departed on that day, there are $x$ pieces of luggage remaining in the $j$-th airport and $x \ge b_{i, j}$, then at least $x - b_{i, j}$ pieces of luggage are found, and they \textbf{cease to be lost}.
- \textbf{In the evening}, all $2n$ flights conclude, and the lost luggage transported that day arrives at the corresponding airports.
For each $k$ from $1$ to $m$, the journalists want to know the maximum number of lost pieces of luggage that may be \textbf{unfound} during the checks over the first $k$ days. Note that for each $k$, these values are calculated independently.
|
It is clear that the problem can be solved using flows. We will introduce a dummy source $s$ and $(m+1) n$ vertices $(i,j)$ ($0 \le i \le m$, $1 \le j \le n$), and the following edges will be added to the network: For all $j$ ($1 \le j \le n$), edges from $s$ to $(0, j)$ with capacity $s_j$. For all pairs $i$, $j$ ($1 \le i \le m$, $1 \le j \le n$), edges from $(i-1,j)$ to $(i,(j-2)\bmod n + 1)$ with capacity $a_{i,j}$. For all pairs $i$, $j$ ($1 \le i \le m$, $1 \le j \le n$), edges from $(i-1,j)$ to $(i,j)$ with capacity $b_{i,j}$. For all pairs $i$, $j$ ($1 \le i \le m$, $1 \le j \le n$), edges from $(i-1,j)$ to $(i,j \bmod n + 1)$ with capacity $c_{i,j}$. To each layer $i$ ($1 \le i \le m$), we will connect edges to the sink $t_i$ (for all $j$, edges from $(i, j)$ to $t_i$ will have capacity $\infty$). Thus, the answer to the problem is the values of the maximum flows from $s$ to $t_1, t_2, \ldots, t_m$, respectively. Unfortunately for the participants, the maximum flows $f_1, \ldots, f_m$ ($f_i$ is the maximum flow from $s$ to $t_i$) can differ significantly from each other. This causes algorithms based on the Ford-Fulkerson method to work unjustifiably slowly, specifically in time $\Omega(m^2 n^3)$. If we are wrong, and you managed to squeeze your solution, please share it in the comments :) The values of the maximum flows can be found using the Ford-Fulkerson theorem with the help of dynamic programming. Specifically, let $\textrm{dp}[k][\textrm{msk}]$ be the minimum $ST$-cut on the subgraph $s \cup\{ (i,j) \}_{i,j=0,1}^{k,n}$, where the vertex $(k,j)$ is in the $T$ part of the cut if and only if the $j$-th bit of the mask $\textrm{msk}$ is equal to $1$. This dynamic can be easily computed in time $O(m \cdot 4^n)$, but again, this is too slow. The author's solution is to speed up the recalculation of the next layer of dynamics to time $O(2^n \cdot n)$. Let $\textrm{prv} = \textrm{dp}[i-1]$ be the already computed layer of dynamics, and $\textrm{nxt} = \textrm{dp}[i]$ be the layer of dynamics that we want to compute. Instead of indexing by the mask in the forms, we will index these arrays by sets of vertices of the corresponding layer that are in the $T$ part of the cut. $\textrm{nxt}[U] = \min_{W} \left\{ \textrm{prv}[W] + \sum_{i=1}^n \sum_{j=-1}^1 [i+j \not \in W] \cdot [i \in U] \cdot e(i, j) \right\}$ A couple of clarifications: $e(i, j)$ is simply the weights of the edges between layers $i-1$ and $i$ in a convenient order. It is very difficult for me to keep track of the indices accurately, and this is just a technical detail, so reconstruct how to form $e(i,j)$ from the arrays $a_i$, $b_i$, and $c_i$ from the context. $i + j \not \in W$ actually means $(i+j-1)\bmod n + 1 \not \in W$. Here $[ P ]$ is the Iverson bracket. $[P]=1$ when the predicate $P$ is satisfied, and $[P]=0$ otherwise. The main difficulty of the recalculation lies in the fact that the set $U$ determines which summands will participate in the sum and which will not. For a fixed $U$, each zero bit in the mask $W$ is assigned its own penalty; in other words, we define $\lambda_U(i) = \sum_{j=-1}^1 [i - j \in U] e(i, j)$, then: $\textrm{nxt}[U] = \min_{W} \left\{ \textrm{prv}[W] + \sum_{i=1}^n [i \not \in W] \cdot \lambda_U(i) \right\}$ To efficiently recalculate, we will build a segment tree. Formally, it will be defined as follows: The segment tree contains $n+1$ layers numbered $0,1,2,\ldots,n$. 
The indices of the vertices of the $i$-th layer are the subsets of $\{1,2,\ldots,i\}$, and the two children of the vertex $L$ of the $i$-th layer are the vertices $L$ and $L \cup \{i+1\}$ from the $(i+1)$-th layer. The value at the vertex $L$ from the $i$-th layer is equal to $x(i, L)$, where: $x(i, L) = \min_{D \subseteq \{i+1, \ldots, n\}} \{ \textrm{prv}[L \cup D] + \sum_{j=i+1}^n [j \not \in L \cup D] \cdot \lambda_U(j) \}$ It is clear that the value $x(i-1, L)$ can be easily computed as $x(i-1, L) = \min \{x(i, L) + \lambda_U(i),\, x(i, L \cup \{i\})\}$. This structure is good because when changing $U$, we only need to recalculate the constructed tree up to the last layer where the weight function $\lambda_U$ changed. Now we will iterate over $U$ in the order of Gray codes approximately in the following order: $\emptyset \to \{2\} \to \{2, 3\} \to \{3\} \to \ldots \{1, \ldots \} \to \ldots$ The essence of this process is that we will iterate through all possible sets $U$, where the 2nd bit will change $2^{n-1}$ times, the 3rd bit will change $2^{n-2}$ times, and so on, the $n$-th bit will change $2$ times, while the 1st bit will change only once. When changing the $i$-th bit in $U$, the function $\lambda_U$ will change at points $i-1$, $i$, and $i+1$, thus: If $2 \le i \le n-1$, only the first $i+1$ layers of the segment tree will change, and it will take $2^{i+2}$ operations to recalculate them, which will happen $2^{n+1-i}$ times. If $i = 1$ or $i=n$, the entire tree will need to be rebuilt, but this will only happen $3$ times. The total number of affected vertices when iterating over $U$ will be: $3 \cdot 2^{n+1} + \sum_{i=2}^{n-1} 2^{i+1} \cdot 2^{n+1-i} = 3 \cdot 2^{n+1} + (n-2) 2^{n+2} = (2n-1) \cdot 2^{n+1} = O(n \cdot 2^n)$ Here the constant is not very good and the time constraints are strict, so we will also have to put effort into the implementation. I think it is worth mentioning that using a segment tree on pointers is not a very good idea, and one should write the segment tree in the same indexing as a regular segment tree.
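A brute-force sketch of one DP layer transition in its $O(4^n)$ form, before the segment-tree and Gray-code speed-up described above. The weights $e(i, j)$ are taken as an abstract input here; mapping them to $a_i$, $b_i$, $c_i$ is exactly the technical detail the editorial leaves to the reader, and the toy values in main are assumptions.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll INF = 1e18;

// prv[W]: minimum cut over the previous layers, where W is the subset of the previous
// layer's vertices lying on the T side. e[i][j + 1]: capacity of the edge entering
// vertex i of the new layer from vertex ((i + j) mod n) of the previous layer, j in {-1, 0, 1}.
vector<ll> nextLayer(int n, const vector<ll>& prv, const vector<array<ll, 3>>& e) {
    int full = 1 << n;
    vector<ll> nxt(full, INF);
    for (int U = 0; U < full; U++)
        for (int W = 0; W < full; W++) {
            ll cost = prv[W];
            for (int i = 0; i < n; i++) {
                if (!(U >> i & 1)) continue;                   // an edge is cut only if its head is on the T side
                for (int j = -1; j <= 1; j++) {
                    int from = ((i + j) % n + n) % n;          // cyclic neighbour in the previous layer
                    if (!(W >> from & 1)) cost += e[i][j + 1]; // tail on the S side => the edge crosses the cut
                }
            }
            nxt[U] = min(nxt[U], cost);
        }
    return nxt;
}

int main() {
    int n = 2;
    vector<ll> prv = {0, 3, 3, 6};               // toy cut values for the previous layer
    array<ll, 3> w = {1, 2, 1};                  // toy capacities e(i, j) for j = -1, 0, +1
    vector<array<ll, 3>> e(n, w);
    for (ll v : nextLayer(n, prv, e)) cout << v << ' ';
    cout << '\n';
}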
|
[
"dp",
"flows"
] | 3,500
| null |
2098
|
A
|
Vadim's Collection
|
We call a phone number beautiful if it is a string of $10$ digits, where the $i$-th digit from the left is at least $10 - i$. That is, the first digit must be at least $9$, the second at least $8$, $\ldots$, with the last digit being at least $0$.
For example, 9988776655 is a beautiful phone number, while 9099999999 is not, since the second digit, which is $0$, is less than $8$.
Vadim has a \textbf{beautiful} phone number. He wants to rearrange its digits in such a way that the result is the \textbf{smallest possible beautiful} phone number. Help Vadim solve this problem.
Please note that the phone numbers are compared as integers.
|
Our goal is to obtain the smallest possible string that satisfies the conditions of beauty. Therefore, we need to arrange the digits in order from the first to the last, each time choosing the smallest suitable digit. More formally: For the $i$-th position, we need to place the smallest available digit that is not less than $10 - i$. After placing a digit, it becomes unavailable for further use. We repeat the process for all $10$ positions. Why does this work? Notice that when we place a digit in the $i$-th position, we have at least $i$ digits in the original number that are greater than or equal to $10 - i$, meaning that at most $i-1$ digits have been used from them earlier, and there is always some option available. Thus, at each step, it is beneficial to choose the smallest available suitable digit because the number can always be completed to the end, and choosing a larger digit would result in a larger number.
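A short sketch of this greedy (assuming the input is a single beautiful 10-digit string): for position $i$ (1-based), take the smallest unused digit that is at least $10 - i$.

#include <bits/stdc++.h>
using namespace std;

int main() {
    string s;
    cin >> s;                                  // a beautiful 10-digit phone number
    multiset<int> digits;
    for (char c : s) digits.insert(c - '0');
    string res;
    for (int i = 1; i <= 10; i++) {
        auto it = digits.lower_bound(10 - i);  // smallest available digit >= 10 - i
        res += char('0' + *it);                // guaranteed to exist because s is beautiful
        digits.erase(it);
    }
    cout << res << '\n';
}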
|
[
"brute force",
"greedy"
] | 800
| null |
2098
|
B
|
Sasha and the Apartment Purchase
|
Sasha wants to buy an apartment on a street where the houses are numbered from $1$ to $10^9$ from left to right.
There are $n$ bars on this street, located in houses with numbers $a_1, a_2, \ldots, a_n$. Note that there might be multiple bars in the same house, and in this case, these bars are considered distinct.
Sasha is afraid that by the time he buys the apartment, some bars may close, but \textbf{no more than} $k$ bars can close.
For any house with number $x$, define $f(x)$ as the sum of $|x - y|$ over all open bars $y$ (that is, after closing some bars).
Sasha can potentially buy an apartment in a house with number $x$ (where $1 \le x \le 10^9$) if and only if it is possible to close at most $k$ bars so that after that $f(x)$ becomes minimal among all houses.
Determine how many different houses Sasha can potentially buy an apartment in.
|
Note that for a fixed set of open bars, the optimal point is the median (or, for an even number of elements, any point between the two medians). Proof: Observe that if we move the point from $x$ to $x + 1$, the value of $f$ changes by the number of bars at positions $\le x$ minus the number of bars at positions $> x$. Therefore, $f(x)$ is non-increasing up to the median and non-decreasing after it, which means the optimum is indeed located at the median. Now the problem has been reduced to the following: how many points exist such that, after removing no more than $k$ of the original elements, this point remains a median. Note that to obtain the optimal answer, we can simply remove elements from the beginning or from the end, since removing two elements on different sides of the median changes nothing. Thus, the set of suitable houses is a contiguous segment, and its length is roughly $a_{\frac{n + k}{2}} - a_{\frac{n - k}{2}}$, where $a$ is sorted beforehand (the formulas need to be adjusted with the correct plus or minus one, which we leave as an exercise for the participants).
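One concrete way to pin down the plus-or-minus-one adjustments left as an exercise above; the exact indices are this sketch's own guess and should be verified against the samples (it assumes $k < n$ and 1-based reasoning over the sorted array).

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

ll countHouses(vector<ll> a, ll k) {
    ll n = a.size();
    sort(a.begin(), a.end());
    ll hi = (n + k) / 2 + 1;            // upper median after closing the k leftmost bars (1-based index)
    ll lo = (n - k + 1) / 2;            // lower median after closing the k rightmost bars (1-based index)
    return a[hi - 1] - a[lo - 1] + 1;   // all houses in between are achievable
}

int main() {
    cout << countHouses({1, 2, 3, 10}, 1) << '\n';   // expected 2: houses 2 and 3
}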
|
[
"math",
"sortings"
] | 1,400
| null |
2101
|
A
|
Mex in the Grid
|
You are given $n^2$ cards with values from $0$ to $n^2-1$. You are to arrange them in a $n$ by $n$ grid such that there is \textbf{exactly} one card in each cell.
The MEX (minimum excluded value) of a subgrid$^{\text{∗}}$ is defined as the smallest non-negative integer that does not appear in the subgrid.
Your task is to arrange the cards such that the sum of MEX values over all $\left(\frac{n(n+1)}{2}\right)^2$ subgrids is maximized.
\begin{footnotesize}
$^{\text{∗}}$A subgrid of a $n$ by $n$ grid is specified by four numbers $l_1, r_1, l_2, r_2$ satisfying $1\le l_1\le r_1\le n$ and $1\le l_2\le r_2\le n$. The element in the $i$-th row and the $j$-th column of the grid is part of the subgrid if and only if $l_1\le i\le r_1$ and $l_2\le j\le r_2$.
\end{footnotesize}
|
What is the MEX, if $0$ is not there? When is the MEX equal to $k$? If we want a subgrid to be included in as many other subgrids as possible, what shape should it have? What about its place? The first fact to notice is that if the subgrid does not contain $0$, then the MEX is $0$ and we can ignore such subgrids. If the MEX is going to be $k$, it means all the values from $0$ to $k-1$ are included in the subgrid. For each $0\le k\le n^2$, we count how many subgrids have a MEX of at least $k$, aiming to maximize this. To find the total sum of all MEX values, for each $k$ we count the number of subgrids that contain all the values from $0$ to $k-1$. Summing these counts gives us the final result. We claim that if our subgrid is nearly square (either a perfect square or a difference of $1$ in lengths), and is placed in the middle of the grid, then the answer is maximized. Proof: The product of two numbers with a fixed sum is maximized when they are as close to each other as possible. Therefore, for a subgrid of fixed dimensions, the number of subgrids containing it is maximized when it is placed in the middle of the grid: the choices of rows above and below (and, independently, of columns to the left and right) multiply, and each product is maximized by balancing them. Similarly, it is optimal if the width and height of the subgrid are as close to each other as possible, since increasing one forces the other to decrease. With these two observations combined, a nearly square subgrid placed in the middle of the grid is optimal. Any ordering with this property works; an easy example is a spiral shape that starts from the center of the grid and turns around itself. We had the idea of dividing this problem into two subtasks, in which the second subtask requires you to count the sum of MEX of all subgrids! Do you have a solution for this? Comment it below
|
[
"constructive algorithms",
"implementation"
] | 1,300
|
def magical_spiral(n):
arr = [[-1] * n for _ in range(n)]
if n % 2 == 0:
x, y = n // 2 - 1, n // 2 - 1
else:
x, y = n // 2, n // 2
arr[x][y] = 0
value = step = 1
dir = [(0, 1), (1, 0), (0, -1), (-1, 0)]
while value < n * n:
for d in range(4):
steps = step
step += d % 2; dx, dy = dir[d]
for _ in range(steps):
x += dx; y += dy
if 0 <= x < n and 0 <= y < n and arr[x][y] == -1:
arr[x][y] = value
value += 1
if value >= n * n:
break
if value >= n * n:
break
for row in arr:
print(" ".join(str(num) for num in row))
print()
t = int(input())
for _ in range(t):
n = int(input())
magical_spiral(n)
|
2101
|
B
|
Quartet Swapping
|
You are given a permutation $a$ of length $n$$^{\text{∗}}$. You are allowed to do the following operation any number of times (possibly zero):
- Choose an index $1\le i\le n - 3$. Then, swap $a_i$ with $a_{i + 2}$, and $a_{i + 1}$ with $a_{i + 3}$ simultaneously. In other words, permutation $a$ will be transformed from $[\ldots, a_i, a_{i+1}, a_{i+2}, a_{i+3}, \ldots]$ to $[\ldots, a_{i+2}, a_{i+3}, a_{i}, a_{i+1}, \ldots]$.
Determine the lexicographically smallest permutation$^{\text{†}}$ that can be obtained by applying the above operation any number of times.
\begin{footnotesize}
$^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
$^{\text{†}}$An array $x$ is lexicographically smaller than an array $y$ of the same size if and only if the following holds:
- in the first position where $x$ and $y$ differ, the array $x$ has a smaller element than the corresponding element in $y$.
\end{footnotesize}
|
Is it possible for an element to change its index parity after some operations? What else is not affected by the operation? How to combine the above two properties under the time limit? The first observation is that the parity of the position of the elements doesn't change with the operations. Thus, if an element is at an odd or even index in the initial permutation, it will retain the same parity in the final permutation. As long as at least four elements remain, we can freely arrange them within their index-parity class, but what about the last few? We can see that the parity of the number of inversions is fixed, so by counting the inversions of the initial permutation, the order of the last four elements is uniquely determined. This problem was one of the first problems that we had, and the intended constraints were $n\le 3000$ as a D1A, but it was later moved to D1B with $n\le 2\cdot 10^5$! Do you know how to count the parity of inversions without counting the number of inversions?
|
[
"brute force",
"data structures",
"divide and conquer",
"greedy",
"sortings"
] | 1,800
|
#include <bits/stdc++.h>
#define pb push_back
#define int long long
#define F first
#define S second
#define sz(a) (int)a.size()
#define pii pair<int,int>
#define rep(i , a , b) for(int i = (a) ; i <= (b) ; i++)
#define per(i , a , b) for(int i = (a) ; i >= (b) ; i--)
#define all(a) a.begin(),a.end()
using namespace std ;
const int maxn = 1e6 + 10 ;
int a[maxn] , n , fen[maxn] ;
void upd(int x){
while(x <= n){
fen[x]++;
x += x&-x;
}
}
int que(int x){
int ans =0 ;
while(x){
ans += fen[x] ;
x -= x&-x;
}
return ans ;
}
int f(vector<int> x){
rep(i , 0, n)fen[i] =0;
int ans =0 ;
per(i , sz(x)-1 , 0){
ans += que(x[i]);
upd(x[i]) ;
}
return ans ;
}
signed main(){
ios::sync_with_stdio(0) ; cin.tie(0);
int T ;
cin >> T ;
while(T--){
vector <int> a1 , a2 ;
cin >> n ;
rep(i ,1 ,n){
int x; cin >> x;
if (i%2==1) {
a1.pb(x);
} else {
a2.pb(x);
}
}
bool v = (f(a1)%2 != f(a2)%2);
sort(all(a1)); sort(all(a2));
int x1 = 0, x2 =0;
rep (i ,1 , n) {
if (i%2==1) {
a[i] = a1[x1] ; x1++;
} else {
a[i] = a2[x2] ; x2++;
}
}
if (v) {
swap(a[n] , a[n-2]) ;
}
rep(i ,1 ,n) cout << a[i] << " ";
cout << "\n";
}
}
|
2101
|
C
|
23 Kingdom
|
The distance of a value $x$ in an array $c$, denoted as $d_x(c)$, is defined as the largest gap between any two occurrences of $x$ in $c$.
Formally, $d_x(c) = \max(j - i)$ over all pairs $i < j$ where $c_i = c_j = x$. If $x$ appears only once or not at all in $c$, then $d_x(c) = 0$.
The beauty of an array is the sum of the distances of each distinct value in the array. Formally, the beauty of an array $c$ is equal to $\sum\limits_{1\le x\le n} d_x(c)$.
Given an array $a$ of length $n$, an array $b$ is nice if it also has length $n$ and its elements satisfy $1\le b_i\le a_i$ for all $1\le i\le n$. Your task is to find the maximum possible beauty of any nice array.
|
Does it matter if there are $2$ occurrences of a number or $200$? What greedy approaches can you think of? Are they fast enough? The first observation is that we only care about the first and last occurrences of each number in the final array, as anything in between doesn't change the maximum distance. Instead of counting the distances directly, the main idea of this problem is to break them down into pieces. For example, for a pair $(i, j)$ with $1\le i < j\le n$, $a_i = a_j$, and $i$, $j$ as far apart as possible, instead of adding $j-i$ we count one for each index between $i$ and $j$. In other words, for each index $i$, we count how many begins are at or before $i$ and how many ends are after $i$; adding up these values over all indices gives us the sum of distances. For each prefix and suffix of our array, we count the maximum number of different begins that we can get greedily. It can be proven that if we try to match each $a_i$ with the biggest unmatched number left, we'll end up getting the maximum number of different matches possible. This can be computed in many different ways; one using sets is shown in the implementation below. After that, for each $1\le i < n$, we have an upper bound on how many pairs can span the gap between positions $i$ and $i+1$, as we calculated the maximum number of distinct matches possible on both sides (the prefix ending at $i$ and the suffix starting at $i+1$). It can be proven that there exists an array $b$ meeting the problem conditions such that the bound is attained for every index, and therefore summing up all values gives us the final answer. We thought every greedy approach for this problem was wrong, and the intended solution used to be an $O(n^3)$ DP, but we later realized we were wrong! Can you think of a reasonable DP idea that can solve this problem despite the time limit?
|
[
"binary search",
"brute force",
"data structures",
"greedy",
"ternary search",
"two pointers"
] | 2,200
|
#include <bits/stdc++.h>
#define int long long
// #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
// #pragma GCC optimize("O3")
// #pragma GCC optimize("unroll-loops")
#define F first
#define S second
#define mp make_pair
#define pb push_back
#define all(x) x.begin(), x.end()
#define kill(x) cout << x << "\n", exit(0);
#define pii pair<int, int>
#define pll pair<long long, long long>
#define endl "\n"
using namespace std;
typedef long long ll;
// typedef __int128_t lll;
typedef long double ld;
const int MAXN = (int)1e6 + 7;
const int MOD = (int)1e9 + 7;
const ll INF = (ll)1e18 + 7;
int n, m, k, tmp, t, tmp2, tmp3, tmp4, u, v, w, p, q, ans, flag;
int arr[MAXN], f[MAXN][2];
set<int> st;
void solve() {
cin >> n;
for (int i=1; i<=n; i++) cin >> arr[i];
for (int j=0; j<2; j++) {
st.clear();
for (int i=1; i<=n; i++) st.insert(i);
for (int i=(j? n : 1); (j?i>1 : i<n); (j? i-- : i++)) {
auto it = st.upper_bound(arr[i]);
if (it != st.begin()) it--, st.erase(*it);
f[i][j] = n-st.size();
}
}
ans = 0;
for (int i=1; i<n; i++) ans += min(f[i][0], f[i+1][1]);
cout << ans << endl;
}
int32_t main() {
#ifdef LOCAL
freopen("inp.in", "r", stdin);
freopen("res.out", "w", stdout);
#else
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
#endif
cin >> t;
while (t--) solve();
return 0;
}
|
2101
|
D
|
Mani and Segments
|
An array $b$ of length $|b|$ is cute if the sum of the length of its Longest Increasing Subsequence (LIS) and the length of its Longest Decreasing Subsequence (LDS)$^{\text{∗}}$ is \textbf{exactly} one more than the length of the array. More formally, the array $b$ is cute if $\operatorname{LIS}(b) + \operatorname{LDS}(b) = |b| + 1$.
You are given a permutation $a$ of length $n$$^{\text{†}}$. Your task is to count the number of non-empty subarrays$^{\text{‡}}$ of permutation $a$ that are cute.
\begin{footnotesize}
$^{\text{∗}}$A sequence $x$ is a subsequence of a sequence $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from arbitrary positions.
The longest increasing (decreasing) subsequence of an array is the longest subsequence such that its elements are ordered in strictly increasing (decreasing) order.
$^{\text{†}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
$^{\text{‡}}$An array $x$ is a subarray of an array $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
\end{footnotesize}
|
What are some characteristics of a cute subarray? Is cuteness a monotone property? How to find the maximal cute subarray for a fixed index $i$? The first observation is that a subarray is cute iff there is an index $i$ that is the unique element shared by an LIS and an LDS, and every other element belongs to either the LIS or the LDS. Proof: We know that there cannot be two elements belonging to both the LIS and the LDS, and if there is no shared element, the property cannot hold. Therefore, we conclude that there is exactly one element shared between the LIS and LDS. Diving deeper into the shared element, we can see that the smaller elements before it must all be ascending, and the bigger elements after it must also all be ascending (as this will make our LIS). Similarly, all the bigger elements before it and all the smaller elements after it should be descending. This gives us a good pattern, and we can easily see that any subarray included within a cute subarray is cute itself (call this the monotonicity property). For counting the total number of cute subarrays, we calculate the values of $R_i$ and $L_i$ for each index: let $[L_i, R_i]$ be the largest segment around $i$ for which the condition above, with $i$ as the shared element, still holds. If we can somehow calculate these values for all indices, then we'll end up with $n$ segments (one for each $1\le i\le n$), and a subarray is cute iff it's included in at least one segment. This is a very classic problem, so the solution won't be discussed here. The only detail left is how to calculate the values of $L_i$ and $R_i$. We'll discuss the process of calculating $R_i$ here; $L_i$ follows a very similar pattern. We claim that $R_i = min(R_{i+1}, f(i))$. Here $f(i)$ is defined as follows: if $a_i > a_{i+1}$, then $f(i)$ is the minimum index $j > i + 1$ such that $a_{i+1}\le a_j\le a_i$; if $a_i < a_{i+1}$, then $f(i)$ is the minimum index $j > i + 1$ such that $a_i\le a_j\le a_{i+1}$. Proof: The fact that $R_i\le R_{i+1}$ can easily be concluded from the fact that cuteness is a monotonic property. Also, going back to the conditions described above, we can see that $f(i)$ is basically the first index that will ruin our condition, so based on the definition of $R_i$, we can say that $R_i = min(R_{i+1}, f(i))$. The calculations of $L_i$ and $R_i$ can both be implemented using a Segment Tree or a Monotonic Stack; both implementations are included below. After we have those values, we count the number of subarrays included in at least one segment, and that would be our final answer. We ended up having two D1D difficulty problems, but in the end chose this one as it was more cute! We believe this was one of the cleanest problems we had. However, both the statement and solution made it look too classic, so we had a hard time making sure this idea had not been used elsewhere! Do you have a different approach than ours? Tell us in the comments.
|
[
"data structures",
"implementation",
"sortings",
"two pointers"
] | 2,500
|
#include <bits/stdc++.h>
#define pb push_back
#define int long long
#define F first
#define S second
#define sz(a) (int)a.size()
#define pii pair<int,int>
#define rep(i , a , b) for(int i = (a) ; i <= (b) ; i++)
#define per(i , a , b) for(int i = (a) ; i >= (b) ; i--)
#define all(a) a.begin(),a.end()
using namespace std ;
const int maxn = 5e5 + 10 , mod = 1e9 + 7;
int a[maxn] , ri[maxn] , le[maxn] , bl[maxn] , br[maxn] , sl[maxn] , sr[maxn] ;
signed main(){
ios::sync_with_stdio(0) ; cin.tie(0);
int T;
cin >> T;
while(T--) {
int n; cin >> n;
rep(i ,1, n) {
cin >> a[i];
}
vector <int> s , b;
rep(i , 1 ,n){
while(sz(s) && a[s.back()] > a[i])s.pop_back() ;
while(sz(b) && a[b.back()] < a[i])b.pop_back() ;
sl[i] = (sz(s) ? s.back() : 0);
bl[i] = (sz(b) ? b.back() : 0);
s.pb(i); b.pb(i) ;
}
s.clear();
b.clear();
per(i , n , 1){
while(sz(s) && a[s.back()] > a[i])s.pop_back() ;
while(sz(b) && a[b.back()] < a[i])b.pop_back() ;
sr[i] = (sz(s) ? s.back() : n+1);
br[i] = (sz(b) ? b.back() : n+1);
s.pb(i); b.pb(i) ;
}
ri[n] = n;
per(i , n-1 ,1){
ri[i] = ri[i+1];
if(a[i] > a[i+1] && a[br[i+1]] < a[i]){
ri[i] = min(ri[i] , br[i+1]-1);
}
if(a[i] < a[i+1] && a[sr[i+1]] > a[i]){
ri[i] = min(ri[i] , sr[i+1]-1);
}
}
le[1] = 1 ;
rep(i , 2,n){
le[i] = le[i-1] ;
if(a[i] > a[i-1] && a[bl[i-1]] < a[i]){
le[i] = max(le[i] , bl[i-1]+1);
}
if(a[i] < a[i-1] && a[sl[i-1]] > a[i]){
le[i] = max(le[i] , sl[i-1]+1);
}
}
int ans = ri[1] ;
rep(i , 2 , n){
ans = (ans + (i-1 - le[i] + 1) * (ri[i] - ri[i-1]) + ri[i]-i+1) ;
}
cout << ans << "\n" ;
}
}
|
2101
|
E
|
Kia Bakes a Cake
|
You are given a binary string $s$ of length $n$ and a tree $T$ with $n$ vertices. Let $k$ be the number of 1s in $s$. We will construct a complete undirected weighted graph with $k$ vertices as follows:
- For each $1\le i\le n$ with $s_i = \mathtt{1}$, create a vertex labeled $i$.
- For any two vertices labeled $u$ and $v$ that are created in the above step, define the edge weight between them $w(u, v)$ as the distance$^{\text{∗}}$ between vertex $u$ and vertex $v$ in the tree $T$.
A \textbf{simple} path$^{\text{†}}$ that visits vertices labeled $v_1, v_2, \ldots, v_m$ in this order is nice if for all $1\le i\le m - 2$, the condition $2\cdot w(v_i, v_{i + 1})\le w(v_{i + 1}, v_{i + 2})$ holds. In other words, the weight of each edge in the path must be at least twice the weight of the previous edge. Note that $s_{v_i} = \mathtt{1}$ has to be satisfied for all $1\le i\le m$, as otherwise, there would be no vertex with the corresponding label.
For each vertex labeled $i$ ($1\le i\le n$ and $s_i = \mathtt{1}$) in the complete undirected weighted graph, determine the maximum number of vertices in any nice simple path starting from the vertex labeled $i$.
\begin{footnotesize}
$^{\text{∗}}$The distance between two vertices $a$ and $b$ in a tree is equal to the number of edges on the unique simple path between vertex $a$ and vertex $b$.
$^{\text{†}}$A path is a sequence of vertices $v_1, v_2, \ldots, v_m$ such that there is an edge between $v_i$ and $v_{i + 1}$ for all $1\le i\le m - 1$. A simple path is a path with no repeated vertices, i.e., $v_i\neq v_j$ for all $1\le i < j\le m$.
\end{footnotesize}
|
Can we see any node twice in our walk? What is the upper bound on the length of the path? How do we approach this using dynamic programming? How do we optimize the DP? The distance between any two nodes in a tree is always less than the number of nodes, so the maximum weight that can appear in our constructed graph is at most $n-1$. Because the edge weights along a nice path at least double at every step, the number of edges of any nice path is at most about $\log_2(n)$. The next observation is that we cannot visit any node twice in a nice path (in other words, there are no cycles). Proof: Assume a cycle exists. The weight of each edge is greater than the sum of all previous edge weights, since $\sum_{i=0}^{k} 2^i < 2^{k+1}$. This tells us that no matter how far we go, the last edge moves us farther than all previous edges combined, so we can never end up back at the first node. Therefore, every nice path is a simple path, and no cycles can be made. Now, define $dp_{i, j}$ as the maximum weight of the first edge of a nice path that starts from vertex $j$ and takes $i$ steps, or $0$ if no such path exists. The idea is to build the nice paths in reverse, from the end to the beginning. We can update this DP and build the paths backwards, which gives an $O(n^2\cdot \log(n))$ solution since $i\le \log_2(n)$ (a brute-force sketch of this DP follows). To optimize it, we use centroid decomposition with the suffix trick, keeping the two best candidates (from different subtrees) each time, and update $dp_{i, \cdot}$ in increasing order of $i$ accordingly. This results in an $O(n\cdot \log(n)^3)$ implementation, which can be further optimized to $O(n\cdot \log(n)^2)$ using the fact that we can sort with counting sort, as all values are bounded by $n$. Both implementations are included below. We aimed to kill $\log(n)^3$ implementations but then ended up making some correct $\log(n)^2$ ones TLE, and gave up! What was your favorite part about this problem? Comment it below.
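A brute-force sketch of this DP (assumed helper names, not the official solution): it computes all tree distances with BFS and fills $dp_{i,v}$ exactly as described above, so it runs in $O(n^2\cdot \log(n))$ and is only meant to make the transition explicit on small inputs.

from collections import deque

def nice_path_lengths(n, s, edges):
    g = [[] for _ in range(n + 1)]
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)

    def bfs(src):
        d = [-1] * (n + 1)
        d[src] = 0
        q = deque([src])
        while q:
            v = q.popleft()
            for u in g[v]:
                if d[u] == -1:
                    d[u] = d[v] + 1
                    q.append(u)
        return d

    dist = [None] + [bfs(v) for v in range(1, n + 1)]
    marked = [v for v in range(1, n + 1) if s[v - 1] == '1']
    LOG = max(2, n.bit_length() + 1)          # generous upper bound on the number of edges
    NEG = -1
    # dp[i][v] = maximum first-edge weight of a nice path starting at v with i edges
    dp = [[NEG] * (n + 1) for _ in range(LOG)]
    for v in marked:
        dp[0][v] = 2 * (n - 1)                # sentinel: any first edge is allowed after 0 steps
    for i in range(1, LOG):
        for v in marked:
            for u in marked:
                if u != v and 2 * dist[v][u] <= dp[i - 1][u]:
                    dp[i][v] = max(dp[i][v], dist[v][u])
    ans = []
    for v in range(1, n + 1):
        if s[v - 1] == '0':
            ans.append(-1)
        else:
            best = max(i for i in range(LOG) if dp[i][v] != NEG)
            ans.append(best + 1)              # i edges -> i + 1 vertices
    return ans

# toy usage: the path 1-2-3 with every vertex marked gives [2, 3, 2]
print(nice_path_lengths(3, "111", [(1, 2), (2, 3)]))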
|
[
"data structures",
"dp",
"greedy",
"trees"
] | 3,100
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;
using pii = pair<int, int>;
using pll = pair<long long, long long>;
using ull = unsigned long long;
#define X first
#define Y second
#define SZ(x) int(x.size())
#define all(x) x.begin(), x.end()
#define mins(a,b) (a = min(a,b))
#define maxs(a,b) (a = max(a,b))
#define pb push_back
#define Mp make_pair
#define lc id<<1
#define rc lc|1
#define mid ((l+r)>>1)
mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count());
const ll INF = 1e9 + 23;
const ll MOD = 1e9 + 7;
const int MXN = 7e4+2;
const int LOG = 18;
int n;
string s;
vector<int> g[MXN];
int dp[LOG][MXN];
bool dead[MXN];
int sz[MXN];
int get_sz(int v, int p=0) {
sz[v] = 1;
for(int u : g[v])
if(!dead[u] && u!=p)
sz[v] += get_sz(u, v);
return sz[v];
}
int centroid(int v, int N, int p=0) {
for(int u : g[v])
if(!dead[u] && u!=p && sz[u]+sz[u]>N)
return centroid(u, N, v);
return v;
}
vector<pair<int, int>> U;
vector<pair<int, int>> G;
int h[MXN], par[MXN];
void dfs(int i, int v, int p=0) {
if(s[v-1]=='1') {
G.pb({h[v]<<1, v});
if(dp[i-1][v]>=2*h[v]) U.pb({dp[i-1][v]-2*h[v], v});
}
for(int u : g[v])
if(!dead[u] && u!=p)
par[u] = par[v],
h[u] = h[v]+1,
dfs(i, u, v);
}
void solve(int i, int v) {
dead[v=centroid(v,get_sz(v))] = 1;
U.clear();
G.clear();
h[v] = 0;
par[v] = v;
if(s[v-1]=='1') {
G.pb({h[v]<<1, v});
if(dp[i-1][v]>=2*h[v]) U.pb({dp[i-1][v]-2*h[v], v});
}
for(int u : g[v])
if(!dead[u]) {
par[u] = u;
h[u] = 1;
dfs(i, u);
}
sort(all(U), greater<>());
sort(all(G), greater<>());
int mx1=0, mx2=0, ptr=0;
for(auto [lim, u] : G) {
while(ptr<SZ(U) && U[ptr].X>=lim) {
if(mx1 && par[mx1]==par[U[ptr].Y]) {
if(h[U[ptr].Y]>h[mx1]) mx1 = U[ptr].Y;
}
else if(mx2 && par[mx2]==par[U[ptr].Y]) {
if(h[U[ptr].Y]>h[mx2]) mx2 = U[ptr].Y;
if(h[mx2]>h[mx1]) swap(mx1, mx2);
}
else {
if(!mx1 || h[U[ptr].Y]>h[mx1]) mx2=mx1, mx1=U[ptr].Y;
else if(!mx2 || h[U[ptr].Y]>h[mx2]) mx2=U[ptr].Y;
}
ptr++;
}
if(mx1) {
if(mx1 && par[mx1]!=par[u]) maxs(dp[i][u], h[mx1]+h[u]);
else if(mx2) maxs(dp[i][u], h[mx2]+h[u]);
}
}
for(int u : g[v])
if(!dead[u])
solve(i, u);
}
void Main() {
cin >> n >> s;
for(int i=1; i<=n; i++) g[i].clear();
for(int i=1,u,v; i<n; i++) {
cin >> u >> v;
g[u].pb(v);
g[v].pb(u);
}
for(int i=1; i<=n; i++)
if(s[i-1]=='1')
dp[0][i] = 2*n-2;
else
dp[0][i] = -1;
for(int i=1; i<LOG; i++) {
fill(dead+1, dead+n+1, 0);
fill(dp[i]+1, dp[i]+n+1, -1);
solve(i, 1);
}
for(int i=1; i<=n; i++)
if(s[i-1]=='1') {
for(int j=LOG-1; j>=0; j--)
if(dp[j][i]!=-1) {
cout << j+1 << ' ';
break;
}
}
else cout << "-1 ";
cout << '\n';
}
int32_t main() {
cin.tie(0); cout.tie(0); ios_base::sync_with_stdio(0);
int T = 1;
cin >> T;
while(T--) Main();
return 0;
}
|
2101
|
F
|
Shoo Shatters the Sunshine
|
You are given a tree with $n$ vertices, where each vertex can be colored red, blue, or white. The coolness of a coloring is defined as the maximum distance$^{\text{∗}}$ between a red and a blue vertex.
Formally, if we denote the color of the $i$-th vertex as $c_i$, the coolness of a coloring is $\max d(u, v)$ over all pairs of vertices $1\le u, v\le n$ where $c_u$ is red and $c_v$ is blue. If there are no red or no blue vertices, the coolness is zero.
Your task is to calculate the sum of coolness over all $3^n$ possible colorings of the tree, modulo $998\,244\,353$.
\begin{footnotesize}
$^{\text{∗}}$The distance between two vertices $a$ and $b$ in a tree is equal to the number of edges on the unique simple path between vertex $a$ and vertex $b$.
\end{footnotesize}
|
What are some properties of such colored trees? Are the farthest blue and red nodes unique? Should we look for an $O(n^2)$ solution or optimize a slower one? Note: For the purpose of this editorial, the diameter of a colored tree is defined as a simple path with maximum coolness that has a blue node at one end and a red node at the other. Let's say a node is special iff the distance of the farthest blue node from it plus the distance of the farthest red node from it equals the length of the diameter of the tree. We also define a special edge as an edge such that the distances of the farthest red nodes on its two sides are equal, and the distances of the farthest blue nodes on the two sides are also equal to each other. Something like this: we can see that the sum of the distances of the farthest blue and the farthest red, plus one, is the length of the diameter. We claim that if there is no special node in a colored tree, then there must be a special edge. Proof: If there is no special node in a tree, then the farthest red and blue nodes of each vertex lie on one side of it. We know that if there is at least one blue and one red node, then we have a valid diameter; considering each node on that diameter, because it is not a special node, its farthest blue and red must lie on one side, and this leads us towards a special edge (the formal proof is described later when discussing $T'$). Let's make some more observations. It's easy to see that the diameter of a tree is not necessarily unique, but an interesting fact is that every two valid diameters must intersect each other. Proof: if not, root the tree at the blue/red endpoint of one diameter; this trivially reveals a blue and a red node at a distance longer than our current diameter, a contradiction. This gives us the idea that the intersection of all valid diameters is a non-empty simple path in the tree. Proof: Consider two diameters and take their intersection, then add the rest one by one and take the intersection of the intersections. If any two of the pairwise intersections do not intersect, then there exists a pair of valid diameters that do not intersect each other, which contradicts our hypothesis. By now you probably have the idea that we aim to count the colorings with a special node based on that node, and the others based on their special edge, but there is still a lot to discuss. Going back to the special nodes, it's easy to see that they must be on our diameter, and what makes this more interesting is that they must be on all diameters! So, taking the simple path that is the intersection of all valid diameters, we know that all of our special nodes lie on this path. Digging deeper into special nodes, we can see that for a non-special node $v$, the farthest blue and red both lie in the same subtree if we root the tree at $v$. This makes the sum of the distances to the farthest red and the farthest blue always bigger than the actual length of our diameter (as there is a shared part). Call the child subtree in which both the farthest red and the farthest blue lie the cool child. Let's create another graph based on our tree (call it $T'$) that has the same nodes as the tree, but directed edges. For each non-special node, add a directed edge from the corresponding vertex in $T'$ to its cool child. It's easy to see that special nodes have no outgoing edge in this new graph, while every non-special node has exactly one.
We can use this graph to formally prove almost everything. Let's start with why any tree without a special node must have a special edge. Proof: Having no special node means that all $n$ nodes of our tree have an outgoing edge in $T'$, therefore there are $n$ directed edges as well. Our initial tree had only $n-1$ edges, so by the pigeonhole principle there must be an edge of the initial tree over which two directed edges pass. It is trivial that those two edges cannot have the same direction, and therefore this leads us to a directed cycle of length two. Think about it: this is exactly what a special edge is! So if there is no special node, we now know that there is at least one special edge somewhere. It also turns out that we cannot have more than one special edge in a colored tree. Proof: If there were two, both of them must appear on the diameter, and considering their placement, this contradicts the fact that the distances of the red and blue nodes from both sides must be equal. Combining both observations above, we now know that if there is no special node in the colored tree, then there is exactly one special edge. Now let's solve the first sub-problem: how to solve the problem if there is no special node. Define $f(v, a, b)$ for a node $v$ as the number of colorings in which the farthest blue node is at distance exactly $a$ and the farthest red node at distance exactly $b$. How to calculate $f$? Define $g(v, a, b)$ as the same thing, but this time with distance at most $a$ for blue and at most $b$ for red. It's easy to see that if we somehow calculate $g(v, a, b)$, then the following holds: $$f(v, a, b) = g(v, a, b) - g(v, a-1, b) - g(v, a, b-1) + g(v, a-1, b-1)$$ $$g(v, a, b) = 3^{p}\cdot 2^{q}\cdot 1^{r} \quad (\text{for } a \le b)$$ $p = \#\{u \mid d(u, v) \le a \}$ $q = \#\{ u \mid a < d(u, v) \le b \}$ $r = \#\{ u \mid b < d(u, v) \}$ (if $a > b$, swap the roles of blue and red; a small sketch cross-checking these formulas on a toy tree is included at the end of this editorial). This gives us an $O(n^3)$ solution so far; don't worry, we will optimize it later. Now we count the result for each edge of the tree, considering that it will be a special edge in some colorings. Call the two sides of the edge $u$ and $v$, just like in the diagram above. Assuming we have $f$ calculated for all nodes, the answer is: $$\sum_{a=0}^{n} \sum_{b=0}^{n} f(u, a, b) \cdot f(v, a, b) \cdot (a + b + 1)$$ Let's define a super-special node as a node that is special and, in addition to having the farthest blue and red on different sides, also has either a blue node at distance one less than the farthest blue, or a red node at distance one less than the farthest red. Using the same observations made earlier and a deeper understanding of $T'$, you can prove that there is at least one super-special node, and also at most one, and therefore exactly one super-special node whenever there exists at least one special node. For such trees, we aim to calculate the answer based on their super-special node. Root the tree at an arbitrary node. Define $dp_{v, i, a, b, j}$ as follows: you are at vertex $v$, you have iterated over $i$ children, so far the farthest blue node has distance $a$, the farthest red node has distance $b$, and $j=0$ if these two do not belong to one subtree (if possible, preferably $0$), and $1$ otherwise. Iterate over new subtrees and use the $f$ function defined before to end up with a computable $O(n^4)$ implementation!
Now add another dimension $k$ to the above DP to keep track of whether there is any blue node at distance one less than the farthest blue, or a red node at distance one less than the farthest red. Note that this does not really change anything. The above DP, though correct, does not give us a good solution and is only explained to provide deeper insight. After truly understanding what is going on and what we are looking for, we can break down the same idea and calculate it manually, which results in an $O(n^2)$ solution in the end. The primary fact to notice is that if the number of edges between the farthest red and blue vertices (the endpoints of the diameter) is even, then it can be proven that the middle vertex of this path is a special node, and if it is odd, the middle edge is a special edge. As explaining everything in detail would take forever, try to prove the parts we skipped yourself and ask if you encounter any issues. The intended solution was the $O(n^3)$ one discussed above with some further optimizations, but then a new $O(n^2)$ solution showed up that changed everything! We thought of dividing the problem into two subtasks, but didn't do so, as the observation required for the $O(n^2)$ solution was quite beautiful. Also, we would like to thank Hamed_Ghaffari for his contributions to this problem.
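A small sketch cross-checking the $f$/$g$ formulas above by brute force on a toy tree (the helper names and the color encoding are assumptions; $g$ is written in the symmetric form so that it also covers $a > b$):

from itertools import product
from collections import deque

def bfs_dist(n, adj, src):
    d = [-1] * n
    d[src] = 0
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if d[u] == -1:
                d[u] = d[v] + 1
                q.append(u)
    return d

def g(dist_v, a, b):
    # colorings where the farthest blue from v is at distance <= a and the farthest red <= b
    res, lo, hi = 1, min(a, b), max(a, b)
    for d in dist_v:
        if d <= lo:
            res *= 3        # any of white / blue / red
        elif d <= hi:
            res *= 2        # the color with the smaller radius is forbidden here
        # d > hi: forced white, factor 1
    return res

def f(dist_v, a, b):
    # exactly a and exactly b, via 2D inclusion-exclusion (a, b >= 1 here for brevity)
    return (g(dist_v, a, b) - g(dist_v, a - 1, b)
            - g(dist_v, a, b - 1) + g(dist_v, a - 1, b - 1))

# brute-force check on the path 0-1-2; colors: 0 = white, 1 = blue, 2 = red
n, adj = 3, [[1], [0, 2], [1]]
d0 = bfs_dist(n, adj, 0)
for a in range(1, n):
    for b in range(1, n):
        brute = 0
        for col in product(range(3), repeat=n):
            blues = [d0[i] for i in range(n) if col[i] == 1]
            reds = [d0[i] for i in range(n) if col[i] == 2]
            if blues and reds and max(blues) == a and max(reds) == b:
                brute += 1
        assert brute == f(d0, a, b)
print("f matches the brute force on the toy tree")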
|
[
"combinatorics",
"dp",
"trees"
] | 3,300
|
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using ld = long double;
using pii = pair<int, int>;
using pll = pair<long long, long long>;
using ull = unsigned long long;
#define X first
#define Y second
#define SZ(x) int(x.size())
#define all(x) x.begin(), x.end()
#define mins(a,b) (a = min(a,b))
#define maxs(a,b) (a = max(a,b))
#define pb push_back
#define Mp make_pair
#define lc id<<1
#define rc lc|1
#define mid ((l+r)>>1)
mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count());
const ll INF = 1e9 + 23;
const ll MOD = 998244353;
const int MXN = 3006;
const int LOG = 23;
template<typename T>
inline T md(T x) { return x - (x>=MOD ? MOD : 0); }
template<typename T>
inline void fix(T &x) { x = md(x); }
int n;
vector<int> g[MXN];
vector<int> H[MXN];
int cnt[MXN], pscnt[MXN];
ll red[MXN], redb[MXN];
ll two[MXN], thr[MXN], itwo[MXN];
ll ans;
ll ps[MXN];
void dfs(int v, int h=0, int p=-1) {
cnt[h]++;
H[h].back()++;
for(int u : g[v])
if(u!=p)
dfs(u, h+1, v);
}
void Main() {
cin >> n;
for(int i=1; i<=n; i++) g[i].clear();
ans = 0;
for(int i=0, u,v; i<n-1; i++) {
cin >> u >> v;
g[u].pb(v);
g[v].pb(u);
}
for(int v=1; v<=n; v++) {
for(int adj=-1; adj<SZ(g[v]); adj++) {
for(int i=0; i<n; i++) H[i].clear(), cnt[i]=0;
if(adj==-1) {
cnt[0]++;
for(int u : g[v]) {
for(int i=0; i<n; i++) H[i].pb(0);
dfs(u, 1, v);
}
}
else {
if(g[v][adj]<v) continue;
for(int i=0; i<n; i++) H[i].pb(0);
dfs(v, 0, g[v][adj]);
for(int i=0; i<n; i++) H[i].pb(0);
dfs(g[v][adj], 0, v);
}
pscnt[0] = cnt[0];
for(int i=1; i<n; i++) pscnt[i] = pscnt[i-1] + cnt[i];
red[0] = 1;
redb[0] = 1;
for(int i=1; i<n; i++) {
red[i] = two[cnt[i]]-1;
for(int a : H[i])
fix(red[i] += MOD-(two[a]-1));
redb[i] = md(thr[cnt[i]]-two[cnt[i]]+MOD);
for(int a : H[i])
fix(redb[i] += MOD-(thr[a]-two[a]+MOD)*two[cnt[i]-a]%MOD);
}
for(int i=0; i<n; i++) {
ps[i] = red[i]*two[i==0?0:pscnt[i-1]]%MOD*(i+(adj!=-1))%MOD;
if(i) fix(ps[i] += ps[i-1]);
}
for(int j=0; j<n; j++)
fix(ans += thr[j==0?0:pscnt[j-1]]
*(thr[cnt[j]]-two[cnt[j]]+MOD)%MOD
*itwo[pscnt[j]]%MOD
*(ps[n-1]-ps[j]+MOD)%MOD);
for(int i=0; i<n; i++) {
ps[i] = red[i]*two[i==0?0:pscnt[i-1]]%MOD;
if(i) fix(ps[i] += ps[i-1]);
}
for(int j=0; j<n; j++)
fix(ans += thr[j==0?0:pscnt[j-1]]
*(thr[cnt[j]]-two[cnt[j]]+MOD)%MOD
*itwo[pscnt[j]]%MOD
*j%MOD
*(ps[n-1]-ps[j]+MOD)%MOD);
for(int i=0; i<n; i++) {
ps[i] = redb[i]
*thr[i==0?0:pscnt[i-1]]%MOD
*itwo[pscnt[i]]%MOD
*(i+(adj!=-1))%MOD;
if(i) fix(ps[i] += ps[i-1]);
}
for(int j=1; j<n; j++)
fix(ans += two[pscnt[j-1]]%MOD
*(two[cnt[j]]-1)%MOD
*ps[j-1]%MOD);
for(int i=0; i<n; i++) {
ps[i] = redb[i]
*thr[i==0?0:pscnt[i-1]]%MOD
*itwo[pscnt[i]]%MOD;
if(i) fix(ps[i] += ps[i-1]);
}
for(int j=1; j<n; j++)
fix(ans += two[pscnt[j-1]]%MOD
*(two[cnt[j]]-1)%MOD
*j%MOD
*ps[j-1]%MOD);
for(int i=0; i<n; i++) {
fix(ans += (redb[i]-red[i]+MOD)*thr[i==0?0:pscnt[i-1]]%MOD*(i+i+(adj!=-1))%MOD);
}
}
}
cout << ans << '\n';
}
int32_t main() {
cin.tie(0); cout.tie(0); ios_base::sync_with_stdio(0);
int T = 1;
cin >> T;
two[0] = 1;
for(int i=1; i<MXN; i++) two[i] = two[i-1]*2%MOD;
thr[0] = 1;
for(int i=1; i<MXN; i++) thr[i] = thr[i-1]*3%MOD;
itwo[0] = 1;
for(int i=1; i<MXN; i++) itwo[i] = itwo[i-1]*((MOD+1)/2)%MOD;
while(T--) Main();
return 0;
}
|
2102
|
A
|
Dinner Time
|
Given four integers $n$, $m$, $p$, and $q$, determine whether there exists an integer array $a_1, a_2, \ldots, a_n$ (elements may be negative) satisfying the following conditions:
- The sum of all elements in the array is equal to $m$: $$a_1 + a_2 + \ldots + a_n = m$$
- The sum of every $p$ consecutive elements is equal to $q$: $$a_i + a_{i + 1} + \ldots + a_{i + p - 1} = q,\qquad\text{ for all }1\le i\le n-p+1$$
|
If you have the first $p$ elements, what happens to the rest? What happens if $n\%p\neq0$? How can the fact that we can also use negative numbers be useful? The first thing to realize is that if the first $p$ elements of the array are determined, the rest are uniquely determined ($a_i = a_{i-p}$ must hold so that consecutive window sums stay equal). If $p$ divides $n$, there will be $\frac{n}{p}$ blocks, each summing to $q$, so their total sum, which is $q\cdot \frac{n}{p}$, must be equal to $m$. If not, the incomplete block at the end can absorb any difference in the sum, so it is always possible! This problem was supposed to be about cyclic arrays, and also to require construction, but due to D2A difficulty it was adjusted! Do you have a solution for the cyclic array version? Comment it below.
|
[
"constructive algorithms",
"math"
] | 900
|
t = int(input())
for _ in range(t):
n, m, p, q = map(int, input().split())
if n % p == 0 and (n // p) * q != m:
print("NO")
else:
print("YES")
|
2102
|
B
|
The Picky Cat
|
You are given an array of integers $a_1, a_2, \ldots, a_n$. You are allowed to do the following operation any number of times (possibly zero):
- Choose an index $i$ ($1\le i\le n$). Multiply $a_i$ by $-1$ (i.e., update $a_i := -a_i$).
Your task is to determine whether it is possible to make the element at index $1$ become the median of the array after doing the above operation any number of times. Note that operations can be applied to index $1$ as well, meaning the median can be either the original value of $a_1$ or its negation.
The median of an array $b_1, b_2, \ldots, b_m$ is defined as the $\left\lceil \frac{m}{2} \right\rceil$-th$^{\text{∗}}$ smallest element of array $b$. For example, the median of the array $[3, 1, 2]$ is $2$, while the median of the array $[10, 1, 8, 3]$ is $3$.
It is guaranteed that the absolute value of the elements of $a$ are distinct. Formally, there are no pairs of indices $1\le i < j\le n$ where $|a_i| = |a_j|$.
\begin{footnotesize}
$^{\text{∗}}$$\lceil x \rceil$ is the ceiling function which returns the least integer greater than or equal to $x$.
\end{footnotesize}
|
Do the initial signs and the order of the elements matter? If an element is before the median in the sorted array, can you shift it forward? What if it is initially after the median? Before starting, we take the absolute value of all elements, since their signs do not matter. Then, if the first element of the array is equal to or smaller than the $\left(\left\lfloor \frac{n}{2}\right\rfloor + 1\right)$-th smallest element of the array, the answer is possible; otherwise, it is impossible. The proof is as follows. If the first element is equal to or smaller than the $\left\lceil\frac{n}{2}\right\rceil$-th smallest element of the array, we can negate the big values until the first element becomes the $\left\lceil\frac{n}{2}\right\rceil$-th smallest element of the array, hence becoming the median. If the first element is equal to the $\left(\left\lfloor \frac{n}{2}\right\rfloor + 1\right)$-th smallest element of the array, we can negate the entire array, which results in the first element becoming the $\left\lceil\frac{n}{2}\right\rceil$-th smallest element of the array. Otherwise, if the first element of the array is larger than the $\left(\left\lfloor \frac{n}{2}\right\rfloor + 1\right)$-th smallest element of the array, negating values smaller than it has no effect, while negating values larger than it only moves the first element further away from the median position. You can verify with similar casework that negating the first element itself does not help either. Would it be easier to calculate the answer for all indices and just output the one for the first index? What's up with the name of the problem?
|
[
"implementation",
"sortings"
] | 900
|
t = int(input())
for _ in range(t):
n = int(input())
a = list(map(int, input().split()))
pairs = [(abs(a[i]), i) for i in range(n)]
pairs.sort()
ans = [0] * n
for i in range(n // 2 + 1):
_, index = pairs[i]
ans[index] = 1
if ans[0]:
print("YES")
else:
print("NO")
|
2103
|
A
|
Common Multiple
|
You are given an array of integers $a_1, a_2, \ldots, a_n$. An array $x_1, x_2, \ldots, x_m$ is beautiful if there exists an array $y_1, y_2, \ldots, y_m$ such that the elements of $y$ are distinct (in other words, $y_i\neq y_j$ for all $1 \le i < j \le m$), and the product of $x_i$ and $y_i$ is the same for all $1 \le i \le m$ (in other words, $x_i\cdot y_i = x_j\cdot y_j$ for all $1 \le i < j \le m$).
Your task is to determine the maximum size of a subsequence$^{\text{∗}}$ of array $a$ that is beautiful.
\begin{footnotesize}
$^{\text{∗}}$A sequence $b$ is a subsequence of a sequence $a$ if $b$ can be obtained from $a$ by the deletion of several (possibly, zero or all) elements from arbitrary positions.
\end{footnotesize}
|
We need to solve the same problem $t$ times, so we will use a loop. From now on we focus on solving one test case, but remember that all of the following is inside a loop. So the array $x$ is beautiful if there exists an array of distinct elements $y$ such that $x_i \cdot y_i$ is the same for all $1 \le i \le n$. We will try to turn the condition into something more understandable. Suppose that the common product is some constant $C$. How can you express each $y_i$ in terms of $C$ and $x_i$? It can be expressed as $y_i = \frac{C}{x_i}$. If all $y_i = \frac{C}{x_i}$ must be different, what does that force upon the array $x$ for it to be beautiful? Let $i$ and $j$ be the indices of two arbitrary elements of $x$. Then for the array $x$ to be beautiful the following needs to hold: $y_i \neq y_j$; $\frac{C}{x_i} \neq \frac{C}{x_j}$, divide both sides by $C$; $\frac{1}{x_i} \neq \frac{1}{x_j}$, as neither $x_i$ nor $x_j$ is $0$, multiply by $x_i \cdot x_j$; $x_j \neq x_i$. Therefore, all values of $x$ must be distinct. But is it enough that all values of $x$ are distinct? It is, and here is how to easily construct an integer array $y$ that satisfies the constraints. Let $P$ be the product of all elements of $x$. Then set $y_i = \frac{P}{x_i}$ for all $1 \le i \le n$. As all values of $x$ are distinct and nonzero, it is guaranteed that all values of $y$ are also distinct. The answer is therefore the number of distinct values in $a$. How can we count that? We can count only the first appearance of each value. So the problem is now how to check whether the appearance of a value at some position $i$ is indeed the first appearance of $a_i$. Read the hints. The answer to the problem is the number of distinct elements in the array. There are many ways to find it; we present one of the easiest. We count only the first appearance of every value. That means that if the array is, for example, [ $1$, $3$, $1$, $5$, $3$, $1$ ], we only count the $1$ at position $1$, the $3$ at position $2$ and the $5$ at position $4$. We do not count the values $1$ at positions $3$ and $6$, nor the value $3$ at position $5$, as they are not the first appearance of those values. We iterate through every element of $a$ and, for an element $a_i$, check whether there exists some $a_j$ $(j \lt i)$ such that $a_i = a_j$. If we can't find such a $j$, then it is the first appearance of $a_i$. We can do it by having one loop iterate over $i$, with a variable that says whether it is the first appearance, originally set to true. Then we iterate in another loop over all indices $1 \le j < i$ and, if at any point $a_i = a_j$, we set the variable to false. We then increase the count of different values if the variable is still true. Memory complexity is $O(n)$. Time complexity is $O(n^2)$ per testcase, giving us a total time complexity of $O(tn^2)$. Come up with more ways to count the number of different elements in the array (one alternative is sketched below).
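One such alternative (a sketch, not the model solution): Python's built-in set counts the distinct values directly, in $O(n)$ expected time per test case.

t = int(input())
for _ in range(t):
    n = int(input())              # n is read but not needed for this approach
    values = input().split()
    print(len(set(values)))       # the answer is the number of distinct values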
|
[
"brute force",
"greedy",
"implementation",
"math"
] | 800
|
t = int(input())
for i in range(t):
n = int(input())
a = input().split(" ")
count = 0
for i in range(0, n):
add = 1
for j in range(0, i):
if a[i]==a[j]:
add = 0
count += add
print(count)
|
2103
|
B
|
Binary Typewriter
|
You are given a binary string $s$ of length $n$ and a typewriter with two buttons: 0 and 1. Initially, your finger is on the button 0. You can do the following two operations:
- Press the button your finger is currently on. This will type out the character that is on the button.
- Move your finger to the other button. If your finger is on button 0, move it to button 1, and vice versa.
The cost of a binary string is defined as the minimum number of operations needed to type the entire string.
Before typing, you may reverse at most one substring$^{\text{∗}}$ of $s$. More formally, you can choose two indices $1\le l\le r\le n$, and reverse the substring $s_{l\ldots r}$, resulting in the new string $s_1s_2\ldots s_{l-1}s_rs_{r-1}\ldots s_ls_{r+1}\ldots s_n$.
Your task is to find the minimum possible cost among all strings obtainable by performing at most one substring reversal on $s$.
\begin{footnotesize}
$^{\text{∗}}$A string $a$ is a substring of a string $b$ if $a$ can be obtained from $b$ by the deletion of several (possibly, zero or all) characters from the beginning and several (possibly, zero or all) characters from the end.
\end{footnotesize}
|
Given the string $s$, how do we calculate the answer? We know that we need to type each character (apply operation $1$ exactly $n$ times). How many switches do we do? If the string $s$ starts with a 1, we need to do a switch immediately. From that point on, we only switch when the current character of the string changes. We can assume that the string starts with a 0, as the other case can be handled by prepending a 0 to the string and subtracting $1$ from the final answer. We will also assume that we only want to make the answer smaller, since to keep it unchanged we can reverse any interval of size $1$. How does reversing some interval [ $l$, $r$ ] change the answer? Notice that the switches before $l-1$, after $r+1$ and in between $l$ and $r$ all stay. So we only care about the values of $s_{l-1}$, $s_l$, $s_r$ and $s_{r+1}$. Notice that if $s_l = s_r$, the reversal does not change the number of changes, so we can assume that $s_l \neq s_r$ in the optimal reversal. Now, to evaluate how good the reversal is, we can observe the following: if $s_{l-1} = s_l$, the number of changes increases by $1$, otherwise it decreases by $1$; if $s_{r} = s_{r+1}$, the number of changes increases by $1$, otherwise it decreases by $1$. These two changes stack. From the previous hint, we conclude that we can decrease the number of changes by at most $2$. To decrease the number of changes by $2$, it must hold that $s_{l-1} \neq s_l$ and $s_r \neq s_{r+1}$. If we want to decrease the number of changes by $1$, it is enough to have $s_{l-1} \neq s_l$ and to take $r=n$. If we cannot do either of the previous two, the number of changes cannot be decreased. Read the hints. As said in the hints, we assume that the string $s$ starts with a 0. Now we need to check whether it is possible to pick indices $l$ and $r$ such that the number of changes after the reversal decreases by $2$. With some casework, we can see that it is always possible if the original number of changes is at least $3$. If the original number of changes is $2$, we cannot decrease it by $2$, but we can decrease it by $1$. If the original number of changes is $0$ or $1$, we cannot decrease it. Counting the number of changes can be done in a single pass. The time and memory complexities are $O(n)$ per testcase. Solve the problem if you were given $n$ strings: you may reorder them however you want and then concatenate them all into one string $s$. What is the minimal number of operations needed to type the $s$ obtained in that way?
|
[
"greedy",
"math"
] | 1,100
|
for _ in range(int(input())):
n = int(input())
a = input().strip()
cnt = 0
for i in range(n-1):
if a[i] != a[i+1]: cnt += 1
if cnt == 0:
print(n + int(a[0] == "1"))
elif cnt == 1:
print(n + 1)
else:
print(n+cnt-1 - int(a[0] == "0" and cnt > 2))
|
2103
|
C
|
Median Splits
|
The median of an array $b_1, b_2, \ldots b_m$, written as $\operatorname{med}(b_1, b_2, \ldots, b_m)$, is the $\left\lceil \frac{m}{2} \right\rceil$-th$^{\text{∗}}$ smallest element of array $b$.
You are given an array of integers $a_1, a_2, \ldots, a_n$ and an integer $k$. You need to determine whether there exists a pair of indices $1 \le l < r < n$ such that:
$$\operatorname{med}(\operatorname{med}(a_1, a_2, \ldots, a_l), \operatorname{med}(a_{l+1}, a_{l+2}, \ldots, a_r), \operatorname{med}(a_{r+1}, a_{r+2}, \ldots, a_n)) \le k.$$
In other words, determine whether it is possible to split the array into three contiguous subarrays$^{\text{†}}$ such that the median of the three subarray medians is less than or equal to $k$.
\begin{footnotesize}
$^{\text{∗}}$$\lceil x \rceil$ is the ceiling function which returns the least integer greater than or equal to $x$.
$^{\text{†}}$An array $x$ is a subarray of an array $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
\end{footnotesize}
|
We split our array into $3$ subarrays, take their medians, and the final median needs to be $\le k$. That means the medians of at least $2$ of those subarrays need to be $\le k$. For a given array $x$ of length $m$, what is the condition for $\operatorname{med}(x_1, x_2, \ldots, x_m) \le k$? There need to be at least as many elements in $x$ that are $\le k$ as those that are $>k$. How can we simplify this? We can replace all elements that are $\le k$ with $1$ and all elements that are $> k$ with $-1$. Now the median of $x$ is $\le k$ if and only if the sum of the elements in $x$ is non-negative. Now that we have replaced the elements of $a$, the question is whether it is possible to split it into $3$ subarrays of which at least $2$ have a non-negative sum. There are $3$ cases: The first and second subarrays have non-negative sums. The second and third subarrays have non-negative sums. The first and third subarrays have non-negative sums. Use prefix/suffix sums and maximums/minimums to help you compute the answer in linear time. Read the hints. Cases $1$ and $2$ from hint 5 are symmetric, so we will only explain how to handle case $1$ (a short sketch of this check follows). Compute the prefix sums of the array; let this be the array $p$. The subarray [$l$, $r$] has a non-negative sum iff $p_{l-1} \le p_{r}$. For each index $2 \le i < n$ compute the value $msp$ (max suffix of prefix sums), such that $msp_i = \max(p_i, p_{i+1},\ldots, p_{n-1})$. Then iterate over the index $1 \le i \le n-2$ that will be the end of the first subarray. If $p_i \ge 0$ and $msp_{i+1} \ge p_i$, then it is possible to split the array so that the medians of the first and second subarrays are $\le k$. Case $3$ is even easier. Find positions $x$ and $y$ such that [ $1$ , $x$ ] is the shortest prefix with median $\le k$ and [ $y$, $n$ ] is the shortest suffix with median $\le k$. Then a split into three subarrays such that the first and third have median $\le k$ is possible if and only if [ $x+1$ , $y-1$ ] is a valid subarray, in other words, if $x + 2 \le y$. As both cases can be checked with a few passes over the array, the time and memory complexities are $O(n)$ per testcase. Solve the problem if you need to find the smallest $k$ for which the answer is YES.
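A sketch of the check for case 1 exactly as described (the helper name is an assumption; it operates on the already transformed $\pm 1$ array and uses 1-indexed prefix sums $p$ and the suffix maximum $msp$):

def first_and_second_ok(vals):
    # vals is the transformed array: +1 for elements <= k, -1 for elements > k
    n = len(vals)
    p = [0] * (n + 1)                          # 1-indexed prefix sums
    for i in range(1, n + 1):
        p[i] = p[i - 1] + vals[i - 1]
    NEG = -10 ** 9
    msp = [NEG] * (n + 2)                      # msp[i] = max(p[i], ..., p[n-1])
    for i in range(n - 1, 0, -1):
        msp[i] = max(msp[i + 1], p[i])
    for i in range(1, n - 1):                  # i = end of the first subarray
        if p[i] >= 0 and msp[i + 1] >= p[i]:
            return True                        # the second subarray can end at some j <= n-1
    return False

# example: transformed array [1, -1, 1, -1, -1] can be split as [1] | [-1, 1] | [-1, -1]
print(first_and_second_ok([1, -1, 1, -1, -1]))   # True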
|
[
"binary search",
"greedy",
"implementation",
"sortings"
] | 1,600
|
def check_prefix_and_middle(n, arr):
suf = [0] * (n + 1)
minsuf = [0] * (n + 1)
suf[n] = minsuf[n] = arr[n - 1]
for i in range(n - 2, -1, -1):
suf[i + 1] = suf[i + 2] + arr[i]
minsuf[i + 1] = min(minsuf[i + 2], suf[i + 1])
s = 0
for i in range(n - 2):
s += arr[i]
if s < 0:
continue
if suf[i + 2] >= minsuf[i + 3]:
return True
return False
def solve():
n, k = map(int, input().split())
arr = list(map(int, input().split()))
# Transform the array based on the value of k
for i in range(n):
arr[i] = 1 if arr[i] <= k else -1
a, b = n + 1, -1
s = 0
for i in range(n):
s += arr[i]
if s >= 0:
a = i + 1
break
s = 0
for i in range(n - 1, -1, -1):
s += arr[i]
if s >= 0:
b = i + 1
break
if a + 1 < b:
print("YES")
return
if check_prefix_and_middle(n, arr):
print("YES")
return
arr.reverse()
if check_prefix_and_middle(n, arr):
print("YES")
return
print("NO")
t = int(input())
for _ in range(t):
solve()
|
2103
|
D
|
Local Construction
|
An element $b_i$ ($1\le i\le m$) in an array $b_1, b_2, \ldots, b_m$ is a local minimum if at least one of the following holds:
- $2\le i\le m - 1$ and $b_i < b_{i - 1}$ and $b_i < b_{i + 1}$, or
- $i = 1$ and $b_1 < b_2$, or
- $i = m$ and $b_m < b_{m - 1}$.
Similarly, an element $b_i$ ($1\le i\le m$) in an array $b_1, b_2, \ldots, b_m$ is a local maximum if at least one of the following holds:
- $2\le i\le m - 1$ and $b_i > b_{i - 1}$ and $b_i > b_{i + 1}$, or
- $i = 1$ and $b_1 > b_2$, or
- $i = m$ and $b_m > b_{m - 1}$.
Note that local minima and maxima are not defined for arrays with only one element.
There is a hidden permutation$^{\text{∗}}$ $p$ of length $n$. The following two operations are applied to permutation $p$ alternately, starting from operation 1, until there is only one element left in $p$:
- \textbf{Operation 1} — remove all elements of $p$ which are \textbf{not} local minima.
- \textbf{Operation 2} — remove all elements of $p$ which are \textbf{not} local maxima.
More specifically, operation 1 is applied during every odd iteration, and operation 2 is applied during every even iteration, until there is only one element left in $p$.
For each index $i$ ($1\le i\le n$), let $a_i$ be the iteration number that element $p_i$ is removed, or $-1$ if it was never removed.
It can be proven that there will be only one element left in $p$ after at most $\lceil \log_2 n\rceil$ iterations (in other words, $a_i \le \lceil \log_2 n\rceil$).
You are given the array $a_1, a_2, \ldots, a_n$. Your task is to construct any permutation $p$ of $n$ elements that satisfies array $a$.
\begin{footnotesize}
$^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
\end{footnotesize}
|
Try to prove the claim that $a_i \le \lceil \log_2 n\rceil$. It will also help you understand when the construction is possible, which is often a good way to think about constructive problems. We can notice that among any two consecutive elements $p_i$ and $p_{i+1}$, one of them has to be deleted. That means that after each operation, around half of the elements are deleted, which gives the bound $a_i \le \lceil \log_2 n\rceil$. It also means that for the array $a$ to describe a possible permutation, no layer may contain two consecutive elements that do not get deleted in the next iteration. The transition to the next layer is defined recursively. Also, WLOG we can assume that local maxima stay, as the case of local minima is symmetric. It makes sense to think about how to arrange the elements of the current layer and select those that do and those that do not get deleted; then we apply the recursive step on the next layer to arrange those that survive. But how can we guarantee that the elements at some positions will not get deleted? If there are $k$ positions at which the surviving elements go, we can simply place the $k$ biggest elements there and fill the rest of the layer with smaller ones. But how do we guarantee that one of the smaller ones does not become a local maximum? We can sort the smaller elements in increasing/decreasing order, so none of them becomes a local maximum. However, there is an edge case for the smaller elements that form a prefix or a suffix of the layer: the part forming a prefix must be sorted in increasing order and the part forming a suffix must be sorted in decreasing order; otherwise the element at the first or last position might become a local maximum when we do not want it to. Read the hints. We process each layer separately and WLOG assume that we delete the elements that are not local maxima. Some elements go on to the next layer; we will get their order when we process that layer, for now we only know their positions. Say exactly $k$ elements go to the next layer; then we can choose them to be the largest $k$ elements, which guarantees they will be local maxima. The question now is how to place the smaller elements. We can imagine that the bigger elements divide the layer into subarrays of indices that will be deleted. It does not matter how we fill those subarrays with the smaller elements, as long as each of them is in sorted order: if the subarray is the prefix of the layer, it must be sorted increasingly, and if it is the suffix of the layer, it must be sorted decreasingly. Depending on the implementation, the time complexity is either $O(n)$ or $O(n \log n)$, both of which are fast enough to pass. Memory complexity is $O(n)$. A small simulation checker is sketched below.
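A small checker sketch (hypothetical helper, not the official solution): it simulates the alternating deletions on a candidate permutation $p$ and returns the iteration at which each element is removed, which makes it easy to verify a constructed permutation against the given array $a$.

def removal_iterations(p):
    n = len(p)
    alive = list(range(n))                     # indices of p that are still present
    a = [-1] * n
    it = 0
    while len(alive) > 1:
        it += 1
        vals = [p[i] for i in alive]
        keep = []
        for j, i in enumerate(alive):
            left = vals[j - 1] if j > 0 else None
            right = vals[j + 1] if j + 1 < len(vals) else None
            if it % 2 == 1:                    # odd iteration: only local minima survive
                survives = (left is None or vals[j] < left) and (right is None or vals[j] < right)
            else:                              # even iteration: only local maxima survive
                survives = (left is None or vals[j] > left) and (right is None or vals[j] > right)
            if survives:
                keep.append(i)
            else:
                a[i] = it
        alive = keep
    return a

print(removal_iterations([2, 4, 1, 5, 3]))     # [-1, 1, 2, 1, 3]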
|
[
"constructive algorithms",
"dfs and similar",
"implementation",
"two pointers"
] | 2,000
|
#include <algorithm>
#include <bitset>
#include <cassert>
#include <chrono>
#include <cmath>
#include <deque>
#include <iomanip>
#include <iostream>
#include <map>
#include <queue>
#include <random>
#include <set>
#include <string>
#include <vector>
typedef long long ll;
typedef long double ld;
using namespace std;
int main()
{
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--)
{
int n;
cin >> n;
vector<int> a(n), p(n);
for (int i = 0; i < n; i++) cin >> a[i];
int mx = max(0, *max_element(a.begin(), a.end())) + 1;
for (int i = 0; i < n; i++) if (a[i] == -1) a[i] = mx;
int l = 1, r = n;
for (int k = 1; k <= mx; k++)
{
int mr = n - 1;
while (mr >= 0 && a[mr] <= k) mr--;
for (int i = 0; i < mr; i++)
{
if (a[i] == k)
{
p[i] = ((k & 1) ? r-- : l++);
}
}
for (int i = n - 1; i > mr; i--)
{
if (a[i] == k)
{
p[i] = ((k & 1) ? r-- : l++);
}
}
}
for (int i = 0; i < n; i++) cout << p[i] << " \n"[i == n - 1];
}
return 0;
}
|
2103
|
E
|
Keep the Sum
|
You are given an integer $k$ and an array $a$ of length $n$, where each element satisfies $0 \le a_i \le k$ for all $1 \le i \le n$. You can perform the following operation on the array:
- Choose two distinct indices $i$ and $j$ ($1 \le i,j \le n$ and $i \neq j$) such that $a_i + a_j = k$.
- Select an integer $x$ satisfying $-a_j \le x \le a_i$.
- Decrease $a_i$ by $x$ and increase $a_j$ by $x$. In other words, update $a_i := a_i - x$ and $a_j := a_j + x$.
Note that the constraints on $x$ ensure that all elements of array $a$ remain between $0$ and $k$ throughout the operations.
Your task is to determine whether it is possible to make the array $a$ non-decreasing$^{\text{∗}}$ using the above operation. If it is possible, find a sequence of at most $3n$ operations that transforms the array into a non-decreasing one.
It can be proven that if it is possible to make the array non-decreasing using the above operation, there exists a solution that uses at most $3n$ operations.
\begin{footnotesize}
$^{\text{∗}}$ An array $a_1, a_2, \ldots, a_n$ is said to be non-decreasing if for all $1 \le i \le n - 1$, it holds that $a_i \le a_{i+1}$.
\end{footnotesize}
|
When is the solution trivially -1? When it is impossible to do any operation and the array is not sorted to begin with. Is that enough, though? It is enough: if we have even a single pair of values that sums to $k$, we can make the array non-decreasing. From now on, assume there is only one such pair, as we can ignore the others. The constraint of $3n$ hints that the solution is linear. Maybe we are actually supposed to sort the array? Let's say the indices of the values that sum to $k$ are $a$ and $b$. How do we swap the values at two other indices $c$ and $d$? We can use the following sequence of operations. It may change the values at positions $a$ and $b$, but their sum will remain $k$, and the values at indices $c$ and $d$ will be swapped. We do an operation on indices $a$ and $b$ so that $value[a] := k - value[c]$ and $value[b] := value[c]$. We do an operation on indices $a$ and $c$ so that $value[a] := k - value[d]$ and $value[c] := value[d]$. We do an operation on indices $a$ and $d$ so that $value[a] := k - value[b]$ and $value[d] := value[b]$. Notice that after this sequence of operations we have swapped the values at positions $c$ and $d$, and $value[a] + value[b]$ is still $k$ (a standalone sketch of this gadget follows). We can now sort the rest of the array (without $a$ and $b$) and then adjust the values of $a$ and $b$ at the end. However, there is still a problem: it might be impossible to adjust the values of $a$ and $b$. Take $k=5$ and the array [$1$, $3$, $2$, $3$, $5$] for example, with $a = 3$ and $b = 4$. How do we avoid such problems? If $a = 1$ and $b = n$, we can just make $value[a]=0$ and $value[b] = k$, and since all other elements are between $0$ and $k$, the array is guaranteed to be sorted. How do we achieve that? We will assume that neither $a$ nor $b$ is $1$ or $n$; if they are, we simply skip some of the steps. The following sequence of operations makes $a=1$ and $b=n$ a valid choice (here $a$ and $b$ refer to their original indices). We do an operation on indices $a$ and $b$ so that $value[a] := k - value[1]$ and $value[b] := value[1]$. We do an operation on indices $1$ and $a$ so that $value[1] := k - value[n]$ and $value[a] := value[n]$. After those two operations we can sort the rest of the array with the sequence of $3$ operations described previously. Then, at the end, we do one operation to make $value[1] = 0$ and $value[n] = k$. Due to $n>4$, in total we use a bit fewer than $3n$ operations in the worst case. Solve the problem in $\lceil \frac{3n}{2} \rceil$ operations.
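A standalone sketch of the swap gadget (0-indexed, hypothetical helper names): op performs one allowed move, choosing $x$ so that position $i$ ends up holding the value $t$, and swap_via_pair swaps two positions while keeping the chosen pair summing to $k$.

def op(a, i, j, t, k, log):
    # one allowed move: requires a[i] + a[j] == k; afterwards a[i] == t and a[j] == k - t
    assert a[i] + a[j] == k and 0 <= t <= k
    x = a[i] - t                               # the x from the statement, -a[j] <= x <= a[i]
    a[i] -= x
    a[j] += x
    log.append((i, j, x))

def swap_via_pair(a, A, B, c, d, k, log):
    # swaps a[c] and a[d]; a[A] + a[B] stays equal to k (their individual values may change)
    op(a, A, B, k - a[c], k, log)              # a[B] now stores the old a[c]
    op(a, A, c, k - a[d], k, log)              # a[c] becomes the old a[d]
    op(a, A, d, k - a[B], k, log)              # a[d] becomes the old a[c]

# toy example: k = 5, the pair (0, 1) sums to 5; swap positions 2 and 3
a, k, log = [1, 4, 3, 0], 5, []
swap_via_pair(a, 0, 1, 2, 3, k, log)
print(a, log)                                  # positions 2 and 3 swapped, a[0] + a[1] still 5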
|
[
"constructive algorithms",
"implementation",
"two pointers"
] | 2,600
|
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=2e5+5;
int n,k,arr[maxn],A,B,p[maxn];
map<int,int> pos;
vector<tuple<int,int,int>> V; /// All operations
void dooperation(int a,int b,int x){
arr[a]-=x;
arr[b]+=x;
V.push_back(make_tuple(a,b,x));
}
void setvalue(int pos,int b,int target){ /// Does operation on (pos, b) such that value of pos becomes target
if(arr[pos]==target)
return;
int dif=target-arr[pos];
dooperation(b,pos,dif);
}
void performswap(int x,int y){
int vx=arr[x],vy=arr[y];
setvalue(A,B,k-vx);
setvalue(x,A,vy);
setvalue(y,A,vx);
}
bool cmp(int a,int b){
return arr[a]<arr[b];
}
void solve(){
pos.clear();
V.clear();
A=B=-1;
cin>>n>>k;
for(int i=1;i<=n;i++)
cin>>arr[i];
bool issorted=true;
for(int i=2;i<=n;i++)
if(arr[i]<arr[i-1])
issorted=false;
if(issorted){
cout<<"0\n";
return;
}
for(int i=1;i<=n;i++){
if(pos.find(k-arr[i])!=pos.end()){
B=i;
A=pos[k-arr[i]];
break;
}
pos[arr[i]]=i;
}
if(A==-1){
cout<<"-1\n";
return;
}
if(A!=1){
setvalue(A,B,k-arr[1]);
setvalue(1,A,k-arr[B]);
A=1;
}
if(B!=n){
setvalue(B,A,k-arr[n]);
setvalue(n,B,k-arr[A]);
B=n;
}
vector<int> invp;
for(int i=2;i<n;i++)
invp.push_back(i);
sort(invp.begin(),invp.end(),cmp);
for(int i=0;i<invp.size();i++)
p[invp[i]]=i+2;
for(int i=2;i<n;i++){
if(p[i]==i)
continue;
performswap(p[i],i);
swap(p[p[i]],p[i]);
i--;
}
setvalue(B,A,k);
cout<<V.size()<<"\n";
for(auto x:V)
cout<<(get<0>(x))<<" "<<(get<1>(x))<<" "<<(get<2>(x))<<"\n";
return;
}
int main(){
int T;
cin>>T;
while(T--)
solve();
return 0;
}
|
2103
|
F
|
Maximize Nor
|
The bitwise nor$^{\text{∗}}$ of an array of $k$-bit integers $b_1, b_2, \ldots, b_m$ can be computed by calculating the bitwise nor cumulatively from left to right. More formally, $\operatorname{nor}(b_1, b_2, \ldots, b_m) = \operatorname{nor}(\operatorname{nor}(b_1, b_2, \ldots, b_{m - 1}), b_m)$ for $m\ge 2$, and $\operatorname{nor}(b_1) = b_1$.
You are given an array of $k$-bit integers $a_1, a_2, \ldots, a_n$. For each index $i$ ($1\le i\le n$), find the maximum bitwise nor among all subarrays$^{\text{†}}$ of $a$ containing index $i$. In other words, for each index $i$, find the maximum value of $\operatorname{nor}(a_l, a_{l+1}, \ldots, a_r)$ among all $1 \le l \le i \le r \le n$.
\begin{footnotesize}
$^{\text{∗}}$ The logical nor of two boolean values is $1$ if both values are $0$, and $0$ otherwise. The bitwise nor of two $k$-bit integers is calculated by performing the logical nor operation on each pair of the corresponding bits.
For example, let us compute $\operatorname{nor}(2, 6)$ when they are represented as $4$-bit numbers. In binary, $2$=$0010_2$ and $6=0110_2$. Therefore, $\operatorname{nor}(2,6) = 1001_2 = 9$ as by performing the logical nor operations from left to right, we have:
- $\operatorname{nor}(0,0) = 1$
- $\operatorname{nor}(0,1) = 0$
- $\operatorname{nor}(1,0) = 0$
- $\operatorname{nor}(1,1) = 0$
Note that if $2$ and $6$ were represented as $3$-bit integers instead, then $\operatorname{nor}(2,6) = 1$.
$^{\text{†}}$An array $x$ is a subarray of an array $y$ if $x$ can be obtained from $y$ by the deletion of several (possibly, zero or all) elements from the beginning and several (possibly, zero or all) elements from the end.
\end{footnotesize}
|
If the problem had queries of the form "What is the nor of some interval [$l$, $r$]?", how would you answer them? You can look at all the bits separately and answer for each one. For each bit, we only care about the position of the last $1$ of that bit at or before $r$. Let that position be $x$. If $x>l$, then if $x$ and $r$ have the same parity, the bit will be $0$; otherwise, it will be $1$. If $x=l$, the bit will be $1$ if $l$ and $r$ have the same parity and $0$ otherwise. If $x<l$, the bit will be $1$ if $l$ and $r$ have different parity and $0$ otherwise. From the previous hint we conclude that, for a fixed $r$, the nor values of intervals [$l$, $r$] repeat when you shift $l$ by $2$, except at positions near the last $1$ of some bit. That means there are only $O(k)$ different nor values among the intervals ending at some $r$ (a small script verifying the parity rule is sketched below). Read the hints. We can go from the beginning to the end of the array and keep all the possible nor values of intervals ending at $r$ in some vector. For each value we also remember the highest position $l$ for which we can obtain it. When we have found all those intervals for some $r$, we use a segment tree to set the answer of every position in each interval to the maximum of its current answer and the nor of the interval. Time complexity is $O(nk^2 + nk \log n)$ and memory complexity is $O(n)$. Nor and nand are well known for being able to mimic any bitwise operation if applied enough times. Make a circuit that adds three $1$-bit numbers (zeros and ones) using $9$ NAND gates.
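A small script cross-checking the per-bit parity rule from the hints against the direct left-to-right computation, by brute force on random small inputs (the helper names are assumptions):

import random

def nor_range_direct(a, l, r, k):
    mask = (1 << k) - 1
    cur = a[l]
    for i in range(l + 1, r + 1):
        cur = mask ^ (cur | a[i])              # k-bit nor, applied left to right
    return cur

def nor_range_parity(a, l, r, k):
    res = 0
    for j in range(k):
        # last position <= r whose j-th bit is 1, or -1 if there is none
        x = max((i for i in range(r + 1) if (a[i] >> j) & 1), default=-1)
        if x > l:
            bit = (x - r) % 2                  # 1 iff x and r have different parity
        elif x == l:
            bit = 1 - (l - r) % 2              # 1 iff l and r have the same parity
        else:
            bit = (l - r) % 2                  # 1 iff l and r have different parity
        res |= bit << j
    return res

random.seed(0)
for _ in range(200):
    k = random.randint(1, 4)
    n = random.randint(1, 8)
    a = [random.randrange(1 << k) for _ in range(n)]
    l = random.randrange(n)
    r = random.randrange(l, n)
    assert nor_range_direct(a, l, r, k) == nor_range_parity(a, l, r, k)
print("the parity rule matches the direct computation")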
|
[
"bitmasks",
"data structures",
"dp",
"implementation",
"sortings"
] | 2,600
|
#include<bits/stdc++.h>
#define ll long long
using namespace std;
const int maxn=2e5+5;
int N,M,arr[maxn],pref[maxn][25],logg,st[4*maxn],stk;
int nor(int a,int b){
return M^(a|b);
}
int getlog(int x){
int ans=-1;
while(x){
ans+=1;
x/=2;
}
return ans;
}
int internor(int l,int r){
if(l<=0 or l>r)
return 0;
if(l==r)
return arr[r];
int ans=0;
for(int j=0;j<=logg;j++){
int bit=(1<<j);
if(pref[r][j]<l)
if((r-l+1)%2==1)
bit=0;
if(pref[r][j]==l)
if((r-l+1)%2==0)
bit=0;
if(pref[r][j]>l)
if((r-pref[r][j]+1)%2==1)
bit=0;
ans+=bit;
}
return ans;
}
void update(int lb,int rb,int x,int pos=1,int l=1,int r=stk){
lb=max(1,lb);
rb=min(N,rb);
if(rb<lb)
return;
if(l==lb and r==rb){
st[pos]=max(st[pos],x);
return;
}
int mid=(l+r)/2;
if(rb<=mid)
return update(lb,rb,x,pos*2,l,mid);
if(lb>mid)
return update(lb,rb,x,pos*2+1,mid+1,r);
update(lb,mid,x,pos*2,l,mid);
update(mid+1,rb,x,pos*2+1,mid+1,r);
return;
}
int get(int pos){
pos+=stk-1;
int v=0;
while(pos){
v=max(v,st[pos]);
pos/=2;
}
return v;
}
void solve(){
int K;
cin>>N>>K;
stk=1;
while(stk<N)
stk<<=1;
M=(1<<K)-1;
for(int i=1;i<=N;i++)
cin>>arr[i];
logg=getlog(M+1)-1;
for(int i=1;i<=N;i++){
for(int j=0;j<=logg;j++)
if(arr[i]&(1<<j))
pref[i][j]=i;
else
pref[i][j]=pref[i-1][j];
}
for(int i=1;i<=N;i++){
for(int j=0;j<=logg;j++){
int p=pref[i][j];
for(int add=-2;add<=2;add++)
update(p+add,i,internor(p+add,i));
}
update(1,i,internor(1,i));
}
for(int i=1;i<=N;i++)
cout<<get(i)<<" \n"[i==N];
}
void reset(){
for(int i=0;i<=N+2;i++){
arr[i]=0;
for(int j=0;j<=logg+2;j++)
pref[i][j]=0;
}
logg=0;
for(int i=0;i<=stk+stk;i++)
st[i]=0;
}
int main(){
int T=1;
cin>>T;
while(T--){
solve();
reset();
}
return 0;
}
|
2104
|
A
|
Three Decks
|
Monocarp placed three decks of cards in a row on the table. The first deck consists of $a$ cards, the second deck consists of $b$ cards, and the third deck consists of $c$ cards, with the condition $a < b < c$.
Monocarp wants to take some number of cards (at least one, but no more than $c$) from the \textbf{third} deck and distribute them between the first two decks so that each of the taken cards ends up in either the first or the second deck. It is possible that all the cards taken from the third deck will go into the same deck.
Your task is to determine whether Monocarp can make the number of cards in all three decks equal using the described operation.
|
Let us assume that we have managed to make the number of cards equal. Then each deck contains $x$ cards. Since the total number of cards hasn't changed, there were $3x$ cards in total. This means that if $a + b + c$ is not divisible by $3$, the answer is definitely "NO". Otherwise, we know that $3x = a + b + c$, which means $x = \frac{a + b + c}{3}$. If the first or the second deck already has more cards than $x$, the answer is also "NO". Otherwise, we can move $x - a$ cards to the first deck and $x - b$ cards to the second deck. Since $b > a$, it's enough to check that $b \le x$. Overall complexity: $O(1)$ per test case.
|
[
"math"
] | 800
|
for _ in range(int(input())):
a, b, c = map(int, input().split())
if (a + b + c) % 3 != 0:
print("NO")
continue
x = (a + b + c) // 3
print("YES" if b <= x else "NO")
|
2104
|
B
|
Move to the End
|
You are given an array $a$ consisting of $n$ integers.
For every integer $k$ from $1$ to $n$, you have to do the following:
- choose an arbitrary element of $a$ and move it to the right end of the array (you can choose the last element, then the array won't change);
- print the sum of $k$ last elements of $a$;
- move the element you've chosen on the first step to its original position (restore the original array $a$).
For every $k$, you choose the element which you move so that the value you print is \textbf{the maximum possible}.
Calculate the value you print for every $k$.
|
If we move an element to the end of the array, then each element (except the one we selected) either remains in its place or moves $1$ position to the left. This means that if we are interested in the sum of the last $k$ elements after we have moved some element to the end, it will necessarily include the last $(k-1)$ elements of the original array (each of them moves at most $1$ position to the left, so it is still among the last $k$ elements). Thus, the answer for each $k$ is the sum of the last $(k-1)$ elements, plus some additional element. Can we use any element? It turns out, yes: if we want to use the element at index $i$, which is not among the last $(k-1)$, we just move it to the end. Obviously, among all these elements, we should take the maximum. Therefore, the answer for each value of $k$ is $\max \limits_{i=1}^{n-k+1} a_i + \sum\limits_{i=n-k+2}^n a_i$. Calculating these values naively is too slow, but we can speed it up as follows: build two arrays $psum_i$ and $pmax_i$, where $psum_i$ is the sum of the first $i$ elements of the array and $pmax_i$ is their maximum. Then the answer for $k$ is simply $pmax_{n-k+1} + psum_n - psum_{n-k+1}$. To construct these two arrays quickly, we use the fact that $psum_0=pmax_0=0$, $psum_{i+1} = psum_i + a_{i+1}$, and $pmax_{i+1} = \max(pmax_i, a_{i+1})$. This way, we obtain a solution in $O(n)$.
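A sketch of this formula as a standalone helper (assumed name, 0-indexed input; it relies on the elements being non-negative so that initializing pmax with 0 is safe, which is the same assumption the C++ solution below makes):

def all_answers(a):
    n = len(a)
    psum = [0] * (n + 1)
    pmax = [0] * (n + 1)
    for i, x in enumerate(a):
        psum[i + 1] = psum[i] + x
        pmax[i + 1] = max(pmax[i], x)
    # answer for k = pmax[n-k+1] + (sum of the last k-1 elements)
    return [pmax[n - k + 1] + psum[n] - psum[n - k + 1] for k in range(1, n + 1)]

print(all_answers([4, 1, 3, 2]))               # [4, 6, 9, 10]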
|
[
"brute force",
"data structures",
"dp",
"greedy",
"implementation"
] | 1,000
|
#include<bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin >> t;
for(int i = 0; i < t; i++)
{
int n;
cin >> n;
vector<int> a(n);
for(int j = 0; j < n; j++) cin >> a[j];
vector<int> pmax(n + 1);
vector<long long> psum(n + 1);
for(int j = 0; j < n; j++)
{
pmax[j + 1] = max(pmax[j], a[j]);
psum[j + 1] = psum[j] + a[j];
}
for(int k = 1; k <= n; k++)
cout << pmax[n - k + 1] + psum[n] - psum[n - k + 1] << " ";
cout << endl;
}
}
|
2104
|
C
|
Card Game
|
Alice and Bob are playing a game. They have $n$ cards numbered from $1$ to $n$. At the beginning of the game, some of these cards are given to Alice, and the rest are given to Bob.
Card with number $i$ beats card with number $j$ if and only if $i > j$, \textbf{with one exception}: card $1$ beats card $n$.
The game continues as long as each player has at least one card. During each turn, the following occurs:
- Alice chooses one of her cards and places it face up on the table;
- Bob, seeing Alice's card, chooses one of his cards and places it face up on the table;
- if Alice's card beats Bob's card, both cards are taken by Alice. Otherwise, both cards are taken by Bob.
A player can use a card that they have taken during one of the previous turns.
The player who has no cards at the beginning of a turn loses. Determine who will win if both players play optimally.
|
There are many ways to solve this problem. I will describe one of the ways that doesn't use too much casework. The key observation we need is that, if a player has a strategy that allows him/her to take the cards on turn $i$, then he/she has a strategy to take the cards on turn $i+1$. There are two ways to prove it, choose any of them: the first way is to consider the options each player has. When a player loses a card, they lose one of their options; when a player gains a card, they gain a new option for a turn. So, if a player can't take cards on some turn, then on the next turn, their options will be even more limited, and the opponent will still be able to use all the cards they were able to use on the previous turn; the second way is to consider it for Alice and Bob separately. If Alice can take cards on turn $i$ no matter what Bob does, then she has a card which beats every card Bob has (and she will still have this card on the next turn). Otherwise, if Bob can take cards on turn $i$ no matter what Alice does, then for every card Alice has, Bob has a card that beats it (and that won't change on the next turn). So, if a player can take cards on the first turn, they win. All that's left to check is who wins on the first turn. If Alice has a card that beats every card Bob has, she wins. Otherwise, Bob wins.
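As a quick illustration of the final check (example ours): for $n = 3$, if Alice holds cards $1$ and $2$ and Bob holds card $3$, Alice's card $1$ beats Bob's only card $3$ (the exception), so Alice wins; if instead Alice holds only card $2$, it does not beat card $3$, so Bob wins.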
|
[
"brute force",
"constructive algorithms",
"games",
"greedy",
"math"
] | 1,100
|
def beats(n, x, y):
if x == 0:
return y == n - 1
if x == n - 1:
return y != 0
return x > y
for _ in range(int(input())):
n = int(input())
owner = input()
good = False
for i in range(n):
if owner[i] != 'A':
continue
good_move = True
for j in range(n):
if owner[j] == 'B' and beats(n, j, i):
good_move = False
if good_move:
good = True
if good:
print('Alice')
else:
print('Bob')
|
2104
|
D
|
Array and GCD
|
You are given an integer array $a$ of size $n$.
You can perform the following operations any number of times (possibly, zero):
- pay one coin and increase any element of the array by $1$ (you must have at least $1$ coin to perform this operation);
- gain one coin and decrease any element of the array by $1$.
Let's say that an array is ideal if both of the following conditions hold:
- each element of the array is at least $2$;
- for each pair of indices $i$ and $j$ ($1 \le i, j \le n$; $i \ne j$) the greatest common divisor (GCD) of $a_i$ and $a_j$ is equal to $1$. If the array has less than $2$ elements, this condition is automatically satisfied.
Let's say that an array is beautiful if it can be transformed into an ideal array using the aforementioned operations, provided that you initially have no coins. If the array is already ideal, then it is also beautiful.
The given array is not necessarily beautiful or ideal. You can remove any elements from it (including removing the entire array or not removing anything at all). Your task is to calculate the minimum number of elements you have to remove (possibly, zero) from the array $a$ to make it \textbf{beautiful}.
|
First, let's understand that operations from the statement impose only one restriction: the sum of the resulting array must not be greater than the sum of the original one. Now let's consider an ideal array. If there is an element that contains at least two distinct prime factors (or its factorization includes some prime more than once), we can reduce the element to any of its factors and the array remains ideal. This means, we can transform any ideal array into another ideal array that consists only of primes. Furthermore, if a prime $p$ is in the array, but another prime $q$, such that $p > q$, is absent, we can reduce $p$ to $q$ without losing the array's ideal property. Therefore, any ideal array of length $n$ can be transformed into the array that consists of the first $n$ primes. Based on the aforementioned facts, an array of length $n$ is beautiful if its sum is at least the sum of the first $n$ prime numbers. To solve the problem, we can iterate over the number of remaining elements $k$. If the sum of $k$ maximums from the array $a$ is at least the sum of $k$ first primes, we can update the answer. It's also useful to note that the sieve of Eratosthenes can efficiently generate the list of the first prime numbers. You need $4 \cdot 10^5$ primes, so you have to use the sieve up to something like $6 \cdot 10^6$.
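As a quick illustration (example ours): for $a = [2, 2, 2]$, the prefix sums of the largest elements are $2, 4, 6$ while the prefix sums of the first primes are $2, 5, 10$, so only $k = 1$ works and the answer is $3 - 1 = 2$ removals.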
|
[
"binary search",
"greedy",
"math",
"number theory"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;
const int N = 6e6;
int main() {
ios::sync_with_stdio(false); cin.tie(0);
vector<int> p, ip(N, 1);
for (int i = 2; i < N; ++i) {
if (!ip[i]) continue;
p.push_back(i);
for (int j = i; j < N; j += i) {
ip[j] = 0;
}
}
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
vector<int> a(n);
for (auto& x : a) cin >> x;
sort(a.begin(), a.end(), greater<int>());
int ans = 0;
long long suma = 0, sump = 0;
for (int i = 0; i < n; ++i) {
suma += a[i];
sump += p[i];
if (suma >= sump) ans = i + 1;
}
cout << n - ans << endl;
}
}
|
2104
|
E
|
Unpleasant Strings
|
Let's call a letter allowed if it is a lowercase letter and is one of the first $k$ letters of the Latin alphabet.
You are given a string $s$ of length $n$, consisting only of allowed letters.
Let's call a string $t$ pleasant if $t$ is a subsequence of $s$.
You are given $q$ strings $t_1, t_2, \dots, t_q$. All of them consist only of allowed letters. For each string $t_i$, calculate the minimum number of allowed letters you need to append to it on the right so that it \textbf{stops} being pleasant.
A sequence $t$ is a subsequence of a sequence $s$ if $t$ can be obtained from $s$ by the deletion of several (possibly, zero or all) elements from arbitrary positions.
|
For a start, let's think about how to check whether $t$ is a subsequence of $s$. There is a greedy solution to that: let's match $t_0$ with the leftmost character in $s$ that is equal to it. Then match $t_1$ with the next leftmost character in $s$ that goes after the first one, and so on. If we matched all characters in $t$, then $t$ is a subsequence of $s$. One of the common ways to write the algorithm above is to compute an array $\mathrm{nxt}[n][k]$ where $\mathrm{nxt}[i][c]$ is the position of the next occurrence of character $c$ starting from position $i$. Array $\mathrm{nxt}$ can be calculated in $\mathcal{O}(nk)$ time, since $\mathrm{nxt}[i]$ differs from $\mathrm{nxt}[i + 1]$ in only one position. Then we can check the string $t$ in $\mathcal{O}(|t|)$ time, just jumping from the current position $p$ (in $s$) to $\mathrm{nxt}[p][t_i]$ until either $t$ ends or $\mathrm{nxt}$ "doesn't exist" (we can set such $\mathrm{nxt}$ values equal to $n$; remember, indexation is zero-based). So, how to add the minimum number of characters to string $t_i$ to make it unpleasant? It's equivalent to making the minimum number of extra jumps in the $\mathrm{nxt}$ array to reach $\mathrm{nxt}[p][c] = n$. Suppose we applied the algorithm above on $t_i$ and finished at position $p$. Now, we are choosing which character to add. It's the same as choosing which $\mathrm{nxt}[p + 1][c]$ to use as the next jump. It's not hard to prove that it's optimal to choose $\mathrm{nxt}[p + 1][c]$ with the maximum value (since it won't increase the answer). After that, we jump to $p' = \mathrm{nxt}[p + 1][c]$ and the procedure repeats. When $p$ becomes greater than or equal to $n$, we've reached the goal. In other words, the answer depends only on the "starting" position $p$. So, we can calculate all of them as a dp array $d[n]$, where $d[i]$ is the minimum number of jumps needed. Then $d[i]$ is equal to $1 + d[\max{(\mathrm{nxt}[i + 1][c])}]$ ($i + 1$ here is to exclude the situation where we match the same character in $s$ with several characters from $t$). In total, we precalculate arrays $\mathrm{nxt}$ and $d$. Then for each $t_i$ we match it in $O(|t_i|)$ and print the value of $d[p]$. The total complexity is $\mathcal{O}(nk + \sum{|t_i|})$.
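A tiny worked example (ours): let $s$ = "ab" with $k = 2$ and zero-based indexing. The precalculation gives $d[2] = 0$ and $d[1] = d[0] = 1$, since from either position the best extra jump already leaves the string. For the query $t$ = "ab" the matching finishes at position $p = 1$ and we print $d[1] = 1$: indeed, appending either allowed letter produces "aba" or "abb", neither of which is a subsequence of "ab".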
|
[
"binary search",
"dp",
"greedy",
"strings"
] | 1,700
|
#include<bits/stdc++.h>
using namespace std;
#define fore(i, l, r) for(int i = int(l); i < int(r); i++)
#define sz(a) int((a).size())
#define x first
#define y second
typedef long long li;
typedef long double ld;
typedef pair<int, int> pt;
template<class A, class B> ostream& operator <<(ostream& out, const pair<A, B> &p) {
return out << "(" << p.x << ", " << p.y << ")";
}
template<class A> ostream& operator <<(ostream& out, const vector<A> &v) {
fore(i, 0, sz(v)) {
if(i) out << " ";
out << v[i];
}
return out;
}
const int INF = int(1e9);
const li INF64 = li(1e18);
const ld EPS = 1e-9;
int n, k;
string s;
inline bool read() {
if(!(cin >> n >> k))
return false;
cin >> s;
return true;
}
inline void solve() {
vector<int> d(n + 1, 0);
vector<vector<int>> nxt(n + 2, vector<int>(k, n));
for (int i = n - 1; i >= 0; i--) {
nxt[i] = nxt[i + 1];
int mx = *max_element(nxt[i].begin(), nxt[i].end());
d[i] = 1 + d[mx];
nxt[i][s[i] - 'a'] = i;
}
int q; cin >> q;
while (q--) {
string t; cin >> t;
int pos = -1;
for (char c : t)
pos = nxt[pos + 1][c - 'a'];
cout << d[pos] << "\n";
}
}
int main() {
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
int tt = clock();
#endif
ios_base::sync_with_stdio(false);
cin.tie(0), cout.tie(0);
cout << fixed << setprecision(15);
if(read()) {
solve();
#ifdef _DEBUG
cerr << "TIME = " << clock() - tt << endl;
tt = clock();
#endif
}
return 0;
}
|
2104
|
F
|
Numbers and Strings
|
For each integer $x$ from $1$ to $n$, we will form the string $S(x)$ according to the following rules:
- compute $(x+1)$;
- write $x$ and $x+1$ next to each other in the decimal system without separators and leading zeros;
- in the resulting string, sort all digits in non-decreasing order.
For example, the string $S(139)$ is 011349 (before sorting the digits, it is 139140). The string $S(99)$ is 00199.
Your task is to count the number of distinct strings among $S(1), S(2), \dots, S(n)$.
|
Consider how the decimal representation of $(x+1)$ depends on $x$. Usually, all digits of $(x+1)$ remain the same, except for the last digit, which is increased by $1$. It does not actually work if the last digit of $x$ is $9$, but we can get a more general rule that handles that case: all $9$'s at the end of the number change to $0$'s, and then the digit before that block of $9$'s gets increased by $1$. Let's split each integer into three parts: the last block of $9$'s (possibly empty), the digit before that block of $9$'s, and all digits before that (possibly none). For example, the number $133799$ is split as follows: $[133, 7, 99]$. The first part of the integer will not be affected if we increase the integer by $1$. So, the order of digits in the first part does not matter: $133799$ gives the same string $S(x)$ as $331799$, since $133$ is a permutation of $331$. So, if there are two integers for which the "middle" and the "right" part are the same, and the left parts are permutations of each other, the strings $S(x)$ are the same for them. Among several numbers having the same $S(x)$, we are interested only in the smallest number. Let's brute force all numbers such that their left parts are sorted in non-descending order. So, if some digit of the number is less than the previous one, then this digit is the "middle" part and every digit to the right of it should be $9$. In my opinion, the easiest way to do this is to run a recursive function like rec(cur, flag), where cur is the current decimal representation of the number, and flag denotes whether we have already built the left and the middle part of the number. If flag is false, we can append any digit to the end of the number (and if it is less than the last digit of cur, we set flag to true); otherwise, we can only append $9$. This may cause our integers to have leading zeroes, but I have an easy fix for this: when converting the decimal representation of the integer to the integer itself, if it has leading zeroes, swap the first digit with the first non-zero digit. Unfortunately, this brute force approach may generate multiple integers having the same $S(x)$, since, for example, $199$ and $901$ have the same $S(x)$, but their left parts are not permutations of each other. So we need to filter the list of numbers we get: if multiple numbers give the same $S(x)$, get rid of all of them except for the minimum one. The model solution does this by storing pairs $(S(x), x)$ and sorting them. After we've filtered the numbers, we can sort them again in non-descending order and use binary search to answer the test cases. The only thing that's left to discuss is why this works fast. We can estimate the number of integers our search will consider as follows: each integer consists of the left part, the digit after it, and the block of $9$'s. The length of the integer is at most $9$, so the length of the left part is at most $8$. The number of possible left parts of length $k$ is ${{k+9}\choose{k}}$, since constructing the integer of length $k$ with non-descending digits can be represented as a partition of $k$ into $10$ non-negative summands. If the length of the left part is $8$, the number of possible left parts is $24310$, the number of ways to choose the middle part is $9$.
This is clearly the largest group of numbers we are interested in, but if you want to check the other groups as well, you can sum them up as follows: $\sum\limits_{k=0}^{8} {{k+9}\choose{k}} \cdot 9 \cdot (9-k)$, where $(9-k)$ represents the number of ways to choose the length of the block of $9$'s. Evaluating this formula shows that a bit less than $700000$ integers are considered by our search.
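For reference (the term-by-term evaluation is ours): the nine terms of the sum are $81$, $720$, $3465$, $11880$, $32175$, $72072$, $135135$, $205920$ and $218790$, which add up to $680238$.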
|
[
"binary search",
"brute force",
"dfs and similar",
"dp",
"implementation",
"math"
] | 2,600
|
#include<bits/stdc++.h>
using namespace std;
const long long A = (long long)(1e18);
string S(long long x)
{
string s = to_string(x) + to_string(x + 1);
sort(s.begin(), s.end());
return s;
}
vector<long long> aux2;
vector<pair<string, long long>> aux;
long long get_num(string cur)
{
int first_non_zero = 0;
while(cur[first_non_zero] == '0') first_non_zero++;
swap(cur[first_non_zero], cur[0]);
return stoll(cur);
}
void rec(string cur, bool flag)
{
if(*max_element(cur.begin(), cur.end()) > '0')
{
long long x = get_num(cur);
aux.push_back(make_pair(S(x), x));
}
if(cur.size() < 9)
{
if(flag)
rec(cur + "9", true);
else
for(char c = '0'; c <= '9'; c++)
rec(cur + string(1, c), c < cur.back());
}
}
void precalc()
{
for(char c = '0'; c <= '9'; c++)
rec(string(1, c), false);
sort(aux.begin(), aux.end());
for(int i = 0; i < aux.size(); i++)
if(i == 0 || aux[i].first != aux[i - 1].first)
aux2.push_back(aux[i].second);
sort(aux2.begin(), aux2.end());
}
int main()
{
ios_base::sync_with_stdio(0);
cin.tie(0);
int t;
cin >> t;
precalc();
for(int i = 0; i < t; i++)
{
long long n;
cin >> n;
cout << upper_bound(aux2.begin(), aux2.end(), n) - aux2.begin() << endl;
}
}
|
2104
|
G
|
Modulo 3
|
Surely, you have seen problems which require you to output the answer modulo $10^9+7$, $10^9+9$, or $998244353$. But have you ever seen a problem where you have to print the answer modulo $3$?
You are given a functional graph consisting of $n$ vertices, numbered from $1$ to $n$. It is a directed graph, in which each vertex has exactly one outgoing arc. The graph is given as the array $g_1, g_2, \dots, g_n$, where $g_i$ means that there is an arc that goes from $i$ to $g_i$. For some vertices, the outgoing arcs might be self-loops.
Initially, all vertices of the graph are colored in color $1$. You can perform the following operation: select a vertex and a color from $1$ to $k$, and then color this vertex and all vertices that are reachable from it. You can perform this operation any number of times (even zero).
You should process $q$ queries. The query is described by three integers $x$, $y$ and $k$. For each query, you should:
- assign $g_x := y$;
- then calculate the number of different graph colorings for the given value of $k$ (two colorings are different if there exists at least one vertex that is colored in different colors in these two colorings); since the answer can be very large, print it \textbf{modulo $3$}.
Note that in every query, the initial coloring of the graph is reset (all vertices initially have color $1$ in each query).
|
First, let's find out how to solve the following problem: you are given a directed graph where all vertices have color $1$. You can choose a vertex and a color from $1$ to $k$, then color it and all vertices reachable from it with that color. How many different colorings are there? Clearly, if a pair of vertices is in the same strongly connected component, they will always have the same color. Furthermore, if we fix a color for each strongly connected component, there is always a way to color the graph such that the colors for all components match the chosen ones. To do this, we can condense the graph and color it in topological order of the condensation. Therefore, the answer for an arbitrary directed graph is $k^c$, where $c$ is the number of strongly connected components. Now let's return to the original problem and try to utilize the fact that the answer is required modulo $3$ (this is probably an important constraint). If $k \bmod 3 = 0$ or $k \bmod 3 = 1$, then any power of $k$ is equal to $k$ itself. Therefore, the only complex case is when $k \bmod 3 = 2$. For $k \bmod 3 = 2$, even powers are equal to $1$ modulo $3$, while odd powers are equal to $2$. Thus, we are actually interested in the parity of the number of strongly connected components. Let's try to understand how to conveniently count the parity of the number of SCCs for a functional graph. Vertices that are not on cycles represent separate components, and each cycle is a separate component. If a cycle has even length, it changes the parity of the number of SCCs, while if it has odd length, it leaves it unchanged. Therefore, we are actually interested in the number of cycles of even length in the functional graph. If instead of a directed graph, we treat it as an undirected one (making each edge bidirectional), the cycles will not change. We are interested in the number of even cycles, and since there will be a cycle in each connected component, we are interested in the number of connected components without odd cycles. In other words, we need to maintain the number of bipartite connected components in the undirected graph. The further part of the solution is purely technical. Let's use the Dynamic Connectivity Offline method: For each edge, find all time segments during which it exists. Build a segment tree, where each leaf represents a moment in time (the $1$-st leaf is the $1$-st query, the $2$-nd leaf is the $2$-nd query, and so on). For each existence segment of an edge, do the following: break it into $O(\log q)$ vertices of the segment tree (as in any segment query) and add information to each corresponding vertex that this edge exists throughout this segment. Implement a Disjoint Set Union with the ability to roll back the last operations (we will need to use rank heuristics, but not path compression heuristics). Since we need to maintain bipartiteness, for each vertex in the DSU we will store the parity of the distance to the parent. To perform rollbacks, we can store two arrays that will show for each operation which cell's value changed (the address of the modified variable) and what it was before. We will traverse the segment tree using depth-first search. Each time we enter a vertex of the segment tree, we will add all edges that exist throughout this segment to the DSU and recalculate the number of bipartite components. Upon exiting a vertex, we will roll back the DSU to the version that was present when we entered the vertex. 
When processing a leaf of the segment tree, we will know how many bipartite components are in the graph at that moment, and thus we will count the number of colorings modulo $3$. Complexity of the solution: $O((q+n) \log q \log n)$.
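To make the bipartiteness bookkeeping more concrete, here is a minimal sketch (our own illustration, not the model code; it omits the rollback machinery and the naming is ours) of a DSU that stores, for every vertex, the parity of its edge to its DSU parent, so that adding an edge can be classified as closing an odd cycle or not:
#include <bits/stdc++.h>
using namespace std;
struct ParityDSU {
    vector<int> p, r, par;            // parent, rank, parity of the edge to the parent
    ParityDSU(int n) : p(n), r(n, 0), par(n, 0) { iota(p.begin(), p.end(), 0); }
    pair<int, int> find(int v) {      // returns {root, parity of the path v -> root}
        int x = 0;
        while (p[v] != v) { x ^= par[v]; v = p[v]; }   // no path compression, so rollbacks stay possible
        return {v, x};
    }
    bool add_edge(int u, int v) {     // returns true if this edge closes an odd cycle
        auto [ru, pu] = find(u);
        auto [rv, pv] = find(v);
        if (ru == rv) return (pu ^ pv) == 0;           // even path between u and v plus the new edge = odd cycle
        if (r[ru] < r[rv]) { swap(ru, rv); swap(pu, pv); }
        p[rv] = ru;
        par[rv] = pu ^ pv ^ 1;                         // the new edge itself contributes parity 1
        if (r[ru] == r[rv]) r[ru]++;
        return false;
    }
};
In the actual solution, every assignment to these arrays additionally records the address and the old value, so that it can be rolled back when leaving a vertex of the segment tree.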
|
[
"data structures",
"divide and conquer",
"dsu",
"graphs",
"trees"
] | 2,700
|
#include <bits/stdc++.h>
using namespace std;
using pt = pair<int, int>;
const int N = 200007;
int n, q;
int k[N];
vector<pt> t[4 * N];
int p[N], e[N], rk[N];
int *pos[3 * N];
int val[3 * N];
int csz;
int ans[N];
void upd(int v, int l, int r, int L, int R, pt val) {
if (L >= R) return;
if (l == L && r == R) {
t[v].push_back(val);
return;
}
int m = (l + r) / 2;
upd(v * 2 + 1, l, m, L, min(R, m), val);
upd(v * 2 + 2, m, r, max(m, L), R, val);
}
void rollback(int tsz) {
while (csz > tsz) {
--csz;
(*pos[csz]) = val[csz];
}
}
pt get(int v) {
if (p[v] == v) return {v, 0};
auto [u, d] = get(p[v]);
return {u, d ^ e[v]};
}
void assign(int& x, int y) {
pos[csz] = &x;
val[csz] = x;
++csz;
x = y;
}
pt unite(int x, int y) {
auto [v, d1] = get(x);
auto [u, d2] = get(y);
if (v == u) return {0, d1 ^ d2};
if (rk[v] > rk[u]) swap(v, u);
assign(p[v], u);
assign(e[v], d1 ^ d2 ^ 1);
assign(rk[u], rk[v] + rk[u]);
return {1, 0};
}
void solve(int v, int l, int r, int cnt) {
int tsz = csz;
for (auto [x, y] : t[v]) {
auto [f, d] = unite(x, y);
//cerr << l << " " << r << " " << x + 1 << " " << y + 1 << " " << f << " " << d << endl;
if (!f) cnt ^= d;
}
if (l != r - 1) {
int m = (l + r) / 2;
solve(v * 2 + 1, l, m, cnt);
solve(v * 2 + 2, m, r, cnt);
} else {
ans[l] = k[l] % 3;
if (ans[l] == 2) ans[l] = cnt + 1;
}
rollback(tsz);
}
int main() {
cin >> n >> q;
vector<int> g(n), lst(n);
for (int i = 0; i < n; ++i) {
cin >> g[i];
--g[i];
}
for (int i = 0; i < q; ++i) {
int x, y;
cin >> x >> y >> k[i];
--x; --y;
upd(0, 0, q, lst[x], i, {x, g[x]});
g[x] = y;
lst[x] = i;
}
for (int i = 0; i < n; ++i) {
upd(0, 0, q, lst[i], q, {i, g[i]});
p[i] = i;
rk[i] = 1;
}
solve(0, 0, q, n & 1);
for (int i = 0; i < q; ++i) {
cout << ans[i] << '\n';
}
}
|
2106
|
A
|
Dr. TC
|
In order to test his patients' intelligence, Dr. TC created the following test.
First, he creates a binary string$^{\text{∗}}$ $s$ having $n$ characters. Then, he creates $n$ binary strings $a_1, a_2, \ldots, a_n$. It is known that $a_i$ is created by first copying $s$, then flipping the $i$'th character ($1$ becomes $0$ and vice versa). After creating all $n$ strings, he arranges them into a grid where the $i$'th row is $a_i$.
For example,
- If $s = 101$, $a = [001, 111, 100]$.
- If $s = 0000$, $a = [1000, 0100, 0010, 0001]$.
The patient needs to count the number of $1$s written on the board in less than a second. Can you pass the test?
\begin{footnotesize}
$^{\text{∗}}$A binary string is a string that only consists of characters $1$ and $0$.
\end{footnotesize}
|
Every character of $s$ is flipped in exactly one of the strings. Therefore, if $s_i = 1$, we need to add $n-1$ to the answer (since the $i$-th character will stay $1$ in $n-1$ strings), and if $s_i = 0$, we need to add $1$ (since the $i$-th character will be $1$ in exactly one string).
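Using the example from the statement (the arithmetic check is ours): $s = 101$ gives $a = [001, 111, 100]$ with $1 + 3 + 1 = 5$ ones, and the formula gives the same total, $(n-1) + 1 + (n-1) = 2 + 1 + 2 = 5$.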
|
[
"brute force",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n; cin >> n;
string s; cin >> s;
int ans = 0;
for (auto x: s) {
if (x == '0') ans++;
else ans += n - 1;
}
cout << ans << '\n';
}
int main() {
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2106
|
B
|
St. Chroma
|
Given a permutation$^{\text{∗}}$ $p$ of length $n$ that contains every integer from $0$ to $n-1$ and a strip of $n$ cells, St. Chroma will paint the $i$-th cell of the strip in the color $\operatorname{MEX}(p_1, p_2, ..., p_i)$$^{\text{†}}$.
For example, suppose $p = [1, 0, 3, 2]$. Then, St. Chroma will paint the cells of the strip in the following way: $[0, 2, 2, 4]$.
You have been given two integers $n$ and $x$. Because St. Chroma loves color $x$, construct a permutation $p$ such that the number of cells in the strip that are painted color $x$ is \textbf{maximized}.
\begin{footnotesize}
$^{\text{∗}}$A permutation of length $n$ is a sequence of $n$ elements that contains every integer from $0$ to $n-1$ exactly once. For example, $[0, 3, 1, 2]$ is a permutation, but $[1, 2, 0, 1]$ isn't since $1$ appears twice, and $[1, 3, 2]$ isn't since $0$ does not appear at all.
$^{\text{†}}$The $\operatorname{MEX}$ of a sequence is defined as the first non-negative integer that does not appear in it. For example, $\operatorname{MEX}(1, 3, 0, 2) = 4$, and $\operatorname{MEX}(3, 1, 2) = 0$.
\end{footnotesize}
|
If $x = n$ then we can output any permutation, since in order for the $\operatorname{MEX}$ to be equal to $n$, every value $0, 1, ..., n-1$ must appear. Otherwise, we want to build a permutation $p$ such that two main conditions hold. Firstly, we want $x$ to appear on the final strip as soon as possible. In other words, the first index $i$ such that $\operatorname{MEX}(p_1, p_2, ..., p_i) = x$ must be minimized. In addition, the index $j$ such that $p_j = x$ must be maximized, since the $\operatorname{MEX}$ of a set that contains $x$ cannot be equal to $x$. There are a lot of constructions that satisfy those conditions. One of them is the permutation $p = [0, 1, ..., x-1, x+1, ..., n-1, x]$.
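As a quick illustration (example ours): for $n = 4$ and $x = 2$ the construction gives $p = [0, 1, 3, 2]$, the strip is colored $[1, 2, 2, 4]$, and color $2$ appears twice, which is the maximum possible here.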
|
[
"constructive algorithms",
"greedy",
"math"
] | 900
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n, x; cin >> n >> x;
for (int i = 0; i < x; i++) cout << i << " ";
for (int i = x+1; i < n; i++) cout << i << " ";
if (x < n) cout << x;
cout << '\n';
}
int main() {
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2106
|
C
|
Cherry Bomb
|
Two integer arrays $a$ and $b$ of size $n$ are \textbf{complementary} if there exists an integer $x$ such that $a_i + b_i = x$ over all $1 \le i \le n$. For example, the arrays $a = [2, 1, 4]$ and $b = [3, 4, 1]$ are complementary, since $a_i + b_i = 5$ over all $1 \le i \le 3$. However, the arrays $a = [1, 3]$ and $b = [2, 1]$ are not complementary.
Cow the Nerd thinks everybody is interested in math, so he gave Cherry Bomb two integer arrays $a$ and $b$. It is known that $a$ and $b$ both contain $n$ \textbf{non-negative} integers not greater than $k$.
Unfortunately, Cherry Bomb has lost some elements in $b$. Lost elements in $b$ are denoted with $-1$. Help Cherry Bomb count the number of possible arrays $b$ such that:
- $a$ and $b$ are \textbf{complementary}.
- All lost elements are replaced with non-negative integers no more than $k$.
|
Let's say that two arrays $a$ and $b$ of size $n$ are $s$-complementary if and only if $a_i + b_i = s$ for all $1 \le i \le n$. Observation 1: If we know every element of $a$, and we know $s$, we can uniquely determine the elements of $b$. We split the problem into two cases. The first case is if there exists at least one element in $b$ that is not missing. Then we know the required sum $s = a_i + b_i$ for every $1 \le i \le n$. For every index $i$, we know $b_i = s - a_i$. If $0 \le s - a_i \le k$ doesn't hold for some index, the answer is $0$, since it is impossible for the two arrays to be complementary while $0 \le b_i \le k$. Otherwise, the answer is $1$ from Observation 1. The second case is if every element of $b$ is missing: Observation 2: if $a$ and $b$ are $s$-complementary, then the maximum element of $b$ is positioned at the index where the minimum element of $a$ is positioned. That is because in order for $a_i + b_i = s$ to hold when $a_i$ is minimum, $b_i$ must be maximized. Observation 3: The maximum element of $b$ must be at least $mx_a - mn_a$, i.e. the difference between the maximum and minimum elements of $a$. Suppose that the maximum of $b$ is positioned at index $i$, and $b_i < mx_a - mn_a$. From Observation 2, $a_i = mn_a$, since the position of the maximum element of $b$ is the position of the minimum element of $a$. Then $s = a_i + b_i$, and $a_i + b_i < a_i + mx_a - mn_a$ (since $b_i < mx_a - mn_a$), so $s < mn_a + mx_a - mn_a = mx_a$ (since $a_i = mn_a$). However, $s < mx_a$ cannot hold (since elements of $b$ cannot be negative), therefore this is a contradiction. From Observations 3 and 1, we can set the maximum element of $b$ to $mx_a - mn_a, mx_a - mn_a + 1, ..., k$, which results in $k - (mx_a - mn_a) + 1$ different solutions.
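A small example of the second case (ours): for $a = [2, 1, 4]$, $k = 5$ and all of $b$ missing, we have $mx_a - mn_a = 3$, the common sum can be any of $4, 5, 6$, and indeed the answer is $k - (mx_a - mn_a) + 1 = 3$.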
|
[
"greedy",
"math",
"sortings"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n, k; cin >> n >> k;
int a[n], b[n];
for (int i = 0; i < n; i++) cin >> a[i];
for (int i = 0; i < n; i++) cin >> b[i];
int s = -1;
for (int i = 0; i < n; i++) {
if (b[i] != -1) {
if (s == -1) s = a[i] + b[i];
else {
if (s != a[i] + b[i]) {
cout << 0 << '\n';
return;
}
}
}
}
if (s == -1) {
sort(a, a+n);
int mx = a[n-1] - a[0];
cout << k - mx + 1 << '\n';
return;
}
for (int i = 0; i < n; i++) {
if (a[i] > s || s - a[i] > k) {
cout << 0 << '\n';
return;
}
}
cout << 1 << '\n';
}
int main() {
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2106
|
D
|
Flower Boy
|
Flower boy has a garden of $n$ flowers that can be represented as an integer sequence $a_1, a_2, \dots, a_n$, where $a_i$ is the beauty of the $i$-th flower from the left.
Igor wants to collect exactly $m$ flowers. To do so, he will walk the garden \textbf{from left to right} and choose whether to collect the flower at his current position. The $i$-th flower among ones he collects must have a beauty of \textbf{at least} $b_i$.
Igor noticed that it might be impossible to collect $m$ flowers that satisfy his beauty requirements, so \textbf{before} he starts collecting flowers, he can pick any integer $k$ and use his magic wand to grow a new flower with beauty $k$ and place it \textbf{anywhere} in the garden (between two flowers, before the first flower, or after the last flower). Because his magic abilities are limited, he may do this \textbf{at most once}.
Output the \textbf{minimum} integer $k$ Igor must pick when he performs the aforementioned operation to ensure that he can collect $m$ flowers. If he can collect $m$ flowers without using the operation, output $0$. If it is impossible to collect $m$ flowers despite using the operation, output $-1$.
|
There is a greedy strategy. Every time you see some flower in $a$, if its beauty is not less than the beauty of the next flower you must pick from $b$, you will pick it. The answer is $0$ if we run the greedy without inserting a new flower in $a$ and still collect all $m$ needed flowers. Now, consider what inserting a new flower of beauty $k$ in $a$ will do. It will allow you to "skip" some flower listed in $b$, as you can place it anywhere in $a$ and just pick it up when necessary. Therefore, instead of inserting a new element, we can reconsider the problem as deleting some element listed in $b$. So, one slow solution would be to try deleting $b_1$, then running the greedy on $a$, and repeat for $b_2$, $b_3, ..., b_m$. Then, we can keep the minimum $b_i$ such that the greedy was successful when deleting that flower. This solution is too slow. Instead, we can compute for every index $1 \le i \le m$ the minimum index $j$ such that if we run the greedy on the prefix $a_1, a_2, ..., a_j$, we will have collected flowers $b_1, b_2, ..., b_i$. Let this value for index $i$ of $b$ be $p_i$. Do the same for each suffix of $a$. Specifically, let $s_i$ be the maximum index $j$ such that if we run the greedy on the suffix $a_j, a_{j+1}, ..., a_n$ we would have collected the flowers $b_i, b_{i+1}, ..., b_m$. These values can be calculated with two pointers. Now, we can delete $b_j$ if $p_{j-1} < s_{j+1}$ (to delete $b_1$, it must be that $s_{2} > 0$ and to delete $b_m$ that $p_{m-1} \le n$). We keep the minimum among all deletable values in $b$. If there does not exist such a value, the answer is $-1$.
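A small example (ours): for $a = [5]$ and $b = [3, 4]$ the plain greedy fails, while deleting either $b_1 = 3$ or $b_2 = 4$ makes it succeed, so the answer is $\min(3, 4) = 3$; indeed, growing a flower of beauty $3$ and placing it before the existing one lets Igor collect $3$ and then $5$.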
|
[
"binary search",
"dp",
"greedy",
"two pointers"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
const int INF = 1e9 + 5;
int main(){
int T; cin >> T;
while(T--){
int N, M; cin >> N >> M;
vector<int> a(N), b(M);
for(int i = 0; i < N; i++) cin >> a[i];
for(int i = 0; i < M; i++) cin >> b[i];
vector<int> backwards_match(M);
int j = N - 1;
for(int i = M - 1; i >= 0; i--){
while(j >= 0 && a[j] < b[i]) j--;
backwards_match[i] = j--;
}
vector<int> forwards_match(M);
j = 0;
for(int i = 0; i < M; i++){
while(j < N && a[j] < b[i]) j++;
forwards_match[i] = j++;
}
if(forwards_match.back() < N){
cout << 0 << endl;
continue;
}
int ans = INF;
for(int i = 0; i < M; i++){
int match_previous = i == 0 ? -1 : forwards_match[i - 1];
int match_next = i + 1 == M ? N : backwards_match[i + 1];
if(match_next > match_previous){
ans = min(ans, b[i]);
}
}
cout << (ans == INF ? -1 : ans) << "\n";
}
}
|
2106
|
E
|
Wolf
|
Wolf has found $n$ sheep with tastiness values $p_1, p_2, ..., p_n$ where $p$ is a permutation$^{\text{∗}}$. Wolf wants to perform binary search on $p$ to find the sheep with tastiness of $k$, but $p$ may not necessarily be sorted. The success of binary search on the range $[l, r]$ for $k$ is represented as $f(l, r, k)$, which is defined as follows:
If $l > r$, then $f(l, r, k)$ fails. Otherwise, let $m = \lfloor\frac{l + r}{2}\rfloor$, and:
- If $p_m = k$, then $f(l, r, k)$ is \textbf{successful},
- If $p_m < k$, then $f(l, r, k) = f(m+1, r, k)$,
- If $p_m > k$, then $f(l, r, k) = f(l, m-1, k)$.
Cow the Nerd decides to help Wolf out. Cow the Nerd is given $q$ queries, each consisting of three integers $l$, $r$, and $k$. Before the search begins, Cow the Nerd may choose a non-negative integer $d$, and $d$ indices $1 \le i_1 < i_2 < \ldots < i_d \le n$ where $p_{i_j} \neq k$ over all $1 \leq j \leq d$. Then, he may re-order the elements $p_{i_1}, p_{i_2}, ..., p_{i_d}$ however he likes.
For each query, output the \textbf{minimum} integer $d$ that Cow the Nerd must choose so that $f(l, r, k)$ can be \textbf{successful}, or report that it is impossible. Note that the queries are independent and the reordering is not actually performed.
\begin{footnotesize}
$^{\text{∗}}$A permutation of length $n$ is an array that contains every integer from $1$ to $n$ exactly once.
\end{footnotesize}
|
Consider some permutation $p_1, p_2, ..., p_n$. Before answering queries, precompute the index of each integer $i$ $(1 \le i \le n)$; suppose it is denoted by $idx_i$. Let's see how to answer some query $l, r, x$. Obviously, if $idx_x$ does not belong in the range $[l, r]$, the answer is $-1$. Otherwise, we want to somehow manipulate the elements so the binary search ends up checking $idx_x$. Suppose the binary search is currently working on some range $[a, b]$. Let $m = \lfloor\frac{a + b}{2}\rfloor$. If $p_m = x$, we are done. Otherwise, consider the following cases: $p_m < x$ and $m < idx_x$: the binary search will continue on $[m+1, b]$, and since $idx_x$ belongs in that range, we simply let it continue. We must not swap this value. $p_m < x$ and $m > idx_x$: the binary search will continue on $[m+1, b]$, but $idx_x$ does not belong in that range. To fix this, we must replace $p_m$ with a value that is greater than $x$, in which case the search will continue on $[a, m-1]$. $p_m > x$ and $m < idx_x$: the binary search will continue on $[a, m-1]$, but $idx_x$ does not belong in that range. To fix this, we must replace $p_m$ with a value that is less than $x$, in which case the search will continue on $[m+1, b]$. $p_m > x$ and $m > idx_x$: the binary search will continue on $[a, m-1]$, and since $idx_x$ belongs in that range, we simply let it continue. We must not swap this value. Keep doing this process until $p_m = x$. Suppose that the number of integers less than $x$ needed is $s$, and the number of integers greater than $x$ needed is $b$. In addition, let the number of values smaller than $x$ that we can't swap (because they already lead in the wanted direction) be $ss$, and the number of bigger ones be $bb$. There are $n - x$ greater values available in $p$, and $x-1$ smaller values available. Therefore, if $b > n - x - bb$ or $s > x - 1 - ss$, the answer is $-1$. If the answer is not $-1$, we swap as many values as possible from $b$ and $s$ (which will be $\min{(b, s)} * 2$) and those that are still left after that (which will be $(\max{(b, s)} - \min{(b, s)}) * 2)$. The final answer is $(\max{(b, s)} - \min{(b, s)}) * 2 + \min{(b, s)} * 2$.
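A worked example (ours): take $p = [1, 4, 2, 3]$ and the query $l = 1$, $r = 4$, $x = 2$, so $idx_2 = 3$. The first midpoint is $m = 2$ with $p_2 = 4 > 2$ but $m < idx_2$, so one value smaller than $2$ is needed there; the search then reaches $m = 3$ and finds $x$. Here $s = 1$, $b = 0$, one smaller value is available, and the answer is $2$: for instance, swapping $p_1$ and $p_2$ gives $[4, 1, 2, 3]$, on which the binary search succeeds.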
|
[
"binary search",
"greedy",
"math"
] | 1,800
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
int n, q; cin >> n >> q;
int arr[n+1];
vector<int> idx(n+1, 0);
for (int i = 1; i <= n; i++) {
cin >> arr[i];
idx[arr[i]] = i;
}
while (q--) {
int l, r, k; cin >> l >> r >> k;
if (idx[k] > r || idx[k] < l) {
cout << -1 << " ";
continue;
}
int big = n - k, small = k - 1;
int needBig = 0, needSmall = 0;
int bigAv = n - k, smallAv = k - 1;
int lo = l, hi = r;
while (lo <= hi) {
int mid = (lo + hi) / 2;
if (arr[mid] == k) break;
if (mid < idx[k]) { // i want to go right
if (k < arr[mid]) {
needSmall++;
}
else smallAv--;
small--;
lo = mid+1;
}
else { // i want to go left
if (k > arr[mid]) {
needBig++;
}
else bigAv--;
big--;
hi = mid-1;
}
}
if (big < 0 || small < 0) {
cout << -1 << " ";
continue;
}
int ans = 2 * min(needBig, needSmall);
int diff = abs(needBig - needSmall);
if (needBig > needSmall) {
if (bigAv < diff) cout << -1 << " ";
else cout << ans + 2 * diff << " ";
}
else {
if (smallAv < diff) cout << -1 << " ";
else cout << ans + 2 * diff << " ";
}
}
}
signed main() {
int t; cin >> t;
while (t--){
solve();
cout << "\n";
}
return 0;
}
|
2106
|
F
|
Goblin
|
Dr. TC has a new patient called Goblin. He wants to test Goblin's intelligence, but he has gotten bored of his standard test. So, he decided to make it a bit harder.
First, he creates a binary string$^{\text{∗}}$ $s$ having $n$ characters. Then, he creates $n$ binary strings $a_1, a_2, \ldots, a_n$. It is known that $a_i$ is created by first copying $s$, then flipping the $i$-th character ($1$ becomes $0$ and vice versa). After creating all $n$ strings, he arranges them into an $n \times n$ grid $g$ where $g_{i, j} = a_{i, j}$ (the $j$-th character of $a_i$).
A set $S$ of size $k$ containing distinct integer pairs $\{(x_1, y_1), (x_2, y_2), \ldots, (x_k, y_k)\}$ is considered good if:
- $1 \leq x_i, y_i \leq n$ for all $1 \leq i \leq k$.
- $g_{x_i, y_i} = 0$ for all $1 \leq i \leq k$.
- For any two integers $i$ and $j$ ($1 \leq i, j \leq k$), coordinate $(x_i, y_i)$ is reachable from coordinate $(x_j, y_j)$ by traveling through a sequence of adjacent cells (which share a side) that all have a value of $0$.
Goblin's task is to find the maximum possible size of a good set $S$. Because Dr. TC is generous, this time he gave him two seconds to find the answer instead of one. Goblin is not known for his honesty, so he has asked you to help him cheat.
\begin{footnotesize}
$^{\text{∗}}$A binary string is a string that only consists of characters $1$ and $0$.
\end{footnotesize}
|
Each column can be broken down into $3$ components: if $s_i = 0$, the top-most component of column $i$ contains $i-1$ zeros, the second component is the single one at row $i$, and the third component contains $n-i$ zeros. If $s_i = 1$, the top-most component contains $i-1$ ones, the second component is the single zero at row $i$, and the third component contains $n-i$ ones. The easiest way to visualize this is with DSU. For each column, create $3$ components, and for each component implicitly store the number of $0$s in it. Now, we will try to merge all adjacent components that both consist of zeros. Consider the following transitions, for every $1 < i \le n$: If $s_i = 0$ and $s_{i-1} = 0$, then the top-most component of column $i-1$ will be merged with the top-most component of column $i$. The same goes for the two bottom-most components. If $s_i = 0$ and $s_{i-1} = 1$, then the top-most component of column $i$ will be merged with the single zero of column $i-1$ (at row $i-1$). If $s_i = 1$ and $s_{i-1} = 0$, then the single zero of column $i$ (at row $i$) will be merged with the bottom-most component of column $i-1$. If $s_i = 1$ and $s_{i-1} = 1$, we merge nothing. After we finish the merging, we just take the component with the maximum number of zeros, which will be the answer to this problem. Note that you do not actually have to use DSU; we only care about the sizes and not the representatives, so we can also use simple prefix sums.
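A small example (ours): for $s = 010$ the rows are $a_1 = 110$, $a_2 = 000$, $a_3 = 011$. The bottom block of column $1$ (rows $2$ and $3$), the single zero of column $2$ (row $2$) and the top block of column $3$ (rows $1$ and $2$) all merge into one component with $2 + 1 + 2 = 5$ zeros, which is the answer.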
|
[
"dfs and similar",
"dp",
"dsu",
"greedy",
"math"
] | 1,900
|
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
struct DSU{
vector<int> p, sz;
vector<ll> score;
DSU(int n){
p.assign(n, 0);
sz.assign(n, 1);
score.assign(n, 0);
for (int i = 0; i < n; i++) p[i] = i;
}
int find(int u){
if (p[u] == u) return u;
p[u] = find(p[u]);
return p[u];
}
void unite(int u, int v){
u = find(u);
v = find(v);
if (u == v) return;
if (sz[u] < sz[v]) swap(u, v);
p[v] = u;
sz[u] += sz[v];
score[u] += score[v];
}
bool same(int u, int v){
return find(u) == find(v);
}
int size(int u){
u = find(u);
return sz[u];
}
};
void solve(){
int n;
cin >> n;
string s;
cin >> s;
DSU dsu(2 * n);
vector<array<int, 2>> prev;
for (int i = 0; i < n; i++){
if (s[i] == '1'){
dsu.score[2 * i] = 1;
dsu.unite(2 * i, 2 * i + 1);
for (int j = 0; j < prev.size(); j++){
int l = prev[j][0];
int r = prev[j][1];
if (i >= l && i <= r)
dsu.unite(2 * i, 2 * (i - 1) + j);
}
prev = {{i, i}};
continue;
}
vector<array<int, 2>> nxt = {{0, i - 1}, {i + 1, n - 1}};
for (int j = 0; j < nxt.size(); j++){
auto [l, r] = nxt[j];
dsu.score[2 * i + j] = r - l + 1;
}
for (int j = 0; j < prev.size(); j++){
for (int k = 0; k < nxt.size(); k++){
auto [l1, r1] = prev[j];
auto [l2, r2] = nxt[k];
if (l2 > r1 || r2 < l1)
continue;
dsu.unite(2 * i + k, 2 * (i - 1) + j);
}
}
prev = nxt;
}
ll ans = 0;
for (int i = 0; i < 2 * n; i++)
ans = max(ans, dsu.score[i]);
cout << ans << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int t;
cin >> t;
while (t--) solve();
}
|
2106
|
G1
|
Baudelaire (easy version)
|
\textbf{This is the easy version of the problem. The only difference between the two versions is that in this version, it is guaranteed that every node is adjacent to node $1$.}
This problem is interactive.
Baudelaire is very rich, so he bought a tree of size $n$ that is rooted at some arbitrary node. Additionally, every node has a value of $1$ or $-1$. \textbf{In this version, every node is adjacent to node $1$. However, please note that node $1$ is not necessarily the root.}
Cow the Nerd saw the tree and fell in love with it. However, computer science doesn't pay him enough, so he can't afford to buy it. Baudelaire decided to play a game with Cow the Nerd, and if he won, he would gift him the tree.
Cow the Nerd does not know which node is the root, and he doesn't know the values of the nodes either. However, he can ask Baudelaire queries of two types:
- $1$ $k$ $u_1$ $u_2$ $...$ $u_k$: Let $f(u)$ be the sum of the values of all nodes in the path from the root of the tree to node $u$. Cow the Nerd may choose an integer $k$ $(1 \le k \le n)$ and $k$ nodes $u_1, u_2, ..., u_k$, and he will receive the value $f(u_1) + f(u_2) + ... + f(u_k)$.
- $2$ $u$: Baudelaire will toggle the value of node $u$. Specifically, if the value of $u$ is $1$ it will become $-1$, and vice versa.
Cow the Nerd wins if he guesses the value of every node correctly (the values of the final tree, \textbf{after} performing the queries) within $n + 200$ total queries. Can you help him win?
|
For some node $u$, let $v_u$ be its value and $s_u$ be the sum of the values of all nodes on the simple path from the root of the tree to node $u$ ($s_u$ will also be referenced as "the sum of node $u$" for the purposes of this solution). Also, let $p_u$ be the parent of node $u$ (if $u$ is not the root). Suppose that we know which node is the root of the tree. Then, we can make $n$ queries to find $s_1, s_2, ..., s_n$, which is sufficient to figure out $v_1, v_2, ..., v_n$: it holds that $s_u = s_{p_u} + v_u$, therefore $v_u = s_u - s_{p_u}$. Then, we have $200$ queries left to find the actual root of the tree. Consider some node $u$, and all of its neighbor nodes $c_1, c_2, ..., c_k$. Observation 1: Toggling the value of node $u$ will change the sum of every adjacent node to $u$ that is not the parent. Observation 2: We can find the parent of node $u$ within $3\log k$ queries. Consider doing binary search to find which node does not change its sum when toggling the value of $u$. In order to check if the prefix $c_1, c_2, ..., c_m$ includes the parent of $u$, we do the following: query for $s1 = s_{c_1} + s_{c_2} + ... + s_{c_m}$, toggle the value of $u$, and query for $s2 = s_{c_1}' + s_{c_2}' + ... + s_{c_m}'$ again. Let $D = |s1 - s2|$. From Observation 1, if every node among $c_1, c_2, ..., c_m$ changed its sum, then $D = 2m$ (each affected sum changes by the same amount, $+2$ or $-2$ depending on the old value of $u$, so the changes cannot cancel out), and we know that the parent does not belong in that prefix. Otherwise, there is a node that did not change its sum, so we can continue our binary search on that prefix. This takes $3$ queries performed $\log k$ times. Note that if we never find the parent, it must mean that $u$ is the root of the tree. Since the tree is a star, if we find the parent of node $1$, we will find the root. If there does not exist a parent, then $1$ is the root. The problem has been solved in $n + 3 \log(n)$ queries, which comfortably fits the query limit. There are a lot of other (better) solutions for this version, but this solution helps best for coming up with the idea for G2.
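For a concrete query count (the estimate is ours): with $n = 1000$ and the star shape, the root search needs at most about $3\lceil\log_2 999\rceil = 30$ queries, and together with the $n$ sum queries this is roughly $1030 \le n + 200 = 1200$.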
|
[
"binary search",
"constructive algorithms",
"divide and conquer",
"greedy",
"interactive",
"trees"
] | 2,200
|
#include <bits/stdc++.h>
using i64 = long long;
void solve() {
int n;
std::cin >> n;
std::vector<int> a(n+1), son(n-1);
for (int i = 1; i < n; ++i) {
int u, v;
std::cin >> u >> v;
son[i-1] = i + 1;
}
int Q = 0;
auto ask = [&](int op, const std::vector<int> &v) {
if (++Q > n + 200) {
assert(false);
}
std::cout << "? " << op;
if (op == 1) {
std::cout << ' ' << v.size();
for (auto u: v) std::cout << ' ' << u;
std::cout << std::endl;
int res = 0;
std::cin >> res;
return res;
} else {
std::cout << ' ' << v[0] << std::endl;
return 0;
}
};
int rt = 0;
int bef = ask(1, son);
ask(2, std::vector<int>{1});
int af = ask(1, son);
if (std::abs(bef - af) == 2 * son.size()) {
rt = 1;
} else {
int l = 0, r = son.size() - 1;
while (l < r) {
int mid = l + (r - l) / 2;
std::vector<int> query(son.begin() + l, son.begin() + mid + 1);
int bef = ask(1, query);
ask(2, std::vector<int>{1});
int af = ask(1, query);
if (std::abs(bef - af) != 2 * (mid - l + 1)) {
r = mid;
} else {
l = mid + 1;
}
}
rt = son[l];
}
for (int i = 1; i <= n; ++i) {
a[i] = ask(1, std::vector<int>{i});
}
for (int i = 2; i <= n; ++i) {
if (rt != i) {
a[i] -= a[1];
}
}
if (rt > 1) a[1] -= a[rt];
std::cout << '!';
for (int i = 1; i <= n; ++i) {
std::cout << ' ' << a[i];
}
std::cout << std::endl;
}
int main() {
std::cin.tie(nullptr)->sync_with_stdio(false);
int T = 1;
std::cin >> T;
while (T--) solve();
return 0;
}
|
2106
|
G2
|
Baudelaire (hard version)
|
\textbf{This is the Hard Version of the problem. The only difference between the two versions is that in the Hard Version the tree may be of any shape.}
This problem is interactive.
Baudelaire is very rich, so he bought a tree of size $n$, rooted at some arbitrary node. Additionally, every node has a value of $1$ or $-1$.
Cow the Nerd saw the tree and fell in love with it. However, computer science doesn't pay him enough, so he can't afford to buy it. Baudelaire decided to play a game with Cow the Nerd, and if he won, he would gift him the tree.
Cow the Nerd does not know which node is the root, and he doesn't know the values of the nodes either. However, he can ask Baudelaire queries of two types:
- $1$ $k$ $u_1$ $u_2$ $...$ $u_k$: Let $f(u)$ be the sum of the values of all nodes in the path from the root of the tree to node $u$. Cow the Nerd may choose an integer $k$ $(1 \le k \le n)$ and $k$ nodes $u_1, u_2, ..., u_k$, and he will receive the value $f(u_1) + f(u_2) + ... + f(u_k)$.
- $2$ $u$: Baudelaire will toggle the value of node $u$. Specifically, if the value of $u$ is $1$, it will become $-1$, and vice versa.
Cow the Nerd wins if he guesses the value of every node correctly (the values of the final tree, \textbf{after} performing the queries) within $n + 200$ total queries. Can you help him win?
|
Read the solution to the Easy Version first. Obviously, if we know the parent of some node, we know which nodes are under that node's subtree, so we never have to consider them as candidates for the root of the tree. Let's use this to choose nodes in a way such that each time we eliminate as many candidates as possible, even in the worst case. Specifically, consider querying for the centroid of the tree (a centroid of a tree is a node that if it is deleted, every connected component left has size no more than half of the original tree). When querying for the centroid, there are two cases: either it is the root of the tree, in which case we are finished, or we find its parent, which eliminates at least half of the current candidates. Then, we delete all nodes that could not possibly be the root, and query for the new centroid. Since the candidates get halved each time, we repeat the above process no more than $\log n$ times. In the absolute worst case (which is not realistic), we make $3\log (n) + 3\log (\frac{n}{2}) + 3\log (\frac{n}{4}) + ... + 3$ queries. For $n = 1000$, this is about $165$ queries. I do want to stress that this case is impossible; the $200$ queries are pretty loose but I want to allow a lot of different solutions (there are some even better ones than the one I described here, feel free to describe better solutions in the comments).
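To see where the $165$ comes from (the breakdown is ours): if the candidate set really halved every round for $n = 1000$, the cost of the root search would be roughly $3 \cdot (10 + 9 + \ldots + 1) = 3 \cdot 55 = 165$ queries, on top of the $n$ queries used to read the sums.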
|
[
"binary search",
"dfs and similar",
"divide and conquer",
"implementation",
"interactive",
"trees"
] | 2,500
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
vector<vector<int>> tree, cd;
vector<int> sub, val;
vector<bool> del;
int n, cdRoot = -1;
int ask(vector<int> a) {
int k = a.size();
cout << "? 1 " << k << " ";
for (auto x: a) {
cout << x << " ";
}
cout << endl;
int ans; cin >> ans;
return ans;
}
void toggle(int u) {
cout << "? 2 " << u << endl;
}
void answer(vector<int> ans) {
cout << "! ";
for (int i = 1; i <= n; i++) cout << ans[i] << " ";
cout << endl;
}
void calc_sums(int node, int par = 0) {
sub[node] = 0;
if (del[node]) return;
for (auto next: tree[node]) {
if (next == par) continue;
calc_sums(next, node);
sub[node] += sub[next];
}
sub[node]++;
}
int find_centroid(int node, int sz, int par = 0) {
for (auto next: tree[node]) {
if (next == par || del[next]) continue;
if (sub[next] * 2 >= sz) return find_centroid(next, sz, node);
}
return node;
}
void build(int node, int prev) {
cd[prev].push_back(node);
del[node] = true;
for (auto next: tree[node]) {
if (del[next]) continue;
calc_sums(next, node);
int cent = find_centroid(next, sub[next], node);
build(cent, node);
}
}
bool inspect(vector<int> a, int tog) {
int b4 = ask(a);
toggle(tog);
int aft = ask(a);
int k = a.size();
int must = b4;
if (aft < b4) {
must -= 2 * k;
}
else must += 2 * k;
if (aft != must) return true;
else return false;
}
int find_sus(vector<int> a, int tog) {
int k = a.size();
int lo = 0, hi = k-1;
int ans = -1;
while (lo <= hi) {
int mid = (lo + hi) / 2;
vector<int> qr;
for (int j = 0; j <= mid; j++) {
qr.push_back(a[j]);
}
if (inspect(qr, tog)) {
ans = mid;
hi = mid-1;
}
else lo = mid+1;
}
return ans;
}
int find_root(int node, int par = 0) {
if (cd[node].empty()) return node;
vector<int> ch;
for (auto next: cd[node]) {
if (next == par) continue;
ch.push_back(next);
}
int sus = find_sus(ch, node);
if (sus == -1) return node;
return find_root(ch[sus], node);
}
void find_vals(int node, int par = 0, int above = 0) {
val[node] -= above;
above += val[node];
for (auto next: tree[node]) {
if (next != par) {
find_vals(next, node, above);
}
}
}
void solve() {
cin >> n;
cd.assign(n+1, {});
tree.assign(n+1, {});
for (int i = 1; i < n; i++) {
int u, v; cin >> u >> v;
tree[u].push_back(v);
tree[v].push_back(u);
}
del.assign(n+1, false);
sub.assign(n+1, 0);
calc_sums(1);
cdRoot = find_centroid(1, n);
build(cdRoot, 0);
int root = find_root(cdRoot);
val.resize(n+1);
for (int i = 1; i <= n; i++) {
val[i] = ask({i});
}
find_vals(root);
answer(val);
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
int t; cin >> t;
while (t--) solve();
return 0;
}
|
2107
|
A
|
LRC and VIP
|
You have an array $a$ of size $n$ — $a_1, a_2, \ldots a_n$.
You need to divide the $n$ elements into $2$ sequences $B$ and $C$, satisfying the following conditions:
- Each element belongs to exactly one sequence.
- Both sequences $B$ and $C$ contain at least one element.
- $\gcd$ $(B_1, B_2, \ldots, B_{|B|}) \ne \gcd(C_1, C_2, \ldots, C_{|C|})$ $^{\text{∗}}$
\begin{footnotesize}
$^{\text{∗}}$$\gcd(x, y)$ denotes the greatest common divisor (GCD) of integers $x$ and $y$.
\end{footnotesize}
|
When all the elements of the array are equal, the solution is trivially impossible since the $\gcd$ of any subset will always be equal to $a_1$. That is in fact the only $\texttt{No}$ case. We show a construction otherwise. Let $\operatorname{mx} = \max(a)$. Put all the elements equal to $\operatorname{mx}$ in one set, and all the other elements in the other set. Then, the $\gcd$ of the first set is $\operatorname{mx}$ while the other set will have a strictly smaller $\gcd$ (because $\gcd(a, b) \le \min(a, b)$). Time complexity is $O(n)$.
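For instance (example ours): for $a = [4, 2, 4]$ we put both $4$s into one sequence (gcd $4$) and the remaining $2$ into the other (gcd $2$), and $4 \ne 2$.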
|
[
"greedy",
"number theory"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n; cin >> n;
vector <int> a(n);
for (int i = 0; i < n; i++){
cin >> a[i];
}
int mn = *min_element(a.begin(), a.end());
int mx = *max_element(a.begin(), a.end());
if (mn == mx){
cout << "No\n";
continue;
}
cout << "Yes\n";
for (int i = 0; i < n; i++){
cout << (1 + (a[i] == mx)) << " \n"[i + 1 == n];
}
}
return 0;
}
|
2107
|
B
|
Apples in Boxes
|
Tom and Jerry found some apples in the basement. They decided to play a game to get some apples.
There are $n$ boxes, and the $i$-th box has $a_i$ apples inside. Tom and Jerry take turns picking up apples. Tom goes first. On their turn, they have to do the following:
- Choose a box $i$ ($1 \le i \le n$) with a positive number of apples, i.e. $a_i > 0$, and pick $1$ apple from this box. Note that this reduces $a_i$ by $1$.
- If no valid box exists, the current player loses.
- If \textbf{after the move}, $\max(a_1, a_2, \ldots, a_n) - \min(a_1, a_2, \ldots, a_n) > k$ holds, then the current player (who made the last move) also loses.
If both players play optimally, predict the winner of the game.
|
Suppose $\max(a) - \min(a) \le k$ holds, and there is at least one $a_i \ge 1$. Then, in fact it is possible to take one apple while still keeping the condition of $\max(a) - \min(a) \le k$. We will subtract from the maximum element. Then $\min(a)$ does not change except when $a_1 = a_2 = \ldots = a_n$. In that special case, you can check that the move is valid as $\max(a) - \min(a)$ becomes $1$. In any other case, $\max(a)$ reduces by $0$ or $1$, so $\max(a) - \min(a)$ does not increase, and thus $\max(a) - \min(a) \le k$ clearly holds. Thus, for an array with $\max(a) - \min(a) \le k$, the only way for a player to lose is when all $a_i = 0$. But this happens exactly after the $\sum(a_i)$-th turn because each turn reduces $\sum(a_i)$ by $1$. It may be that even the first move is not possible; for example, in the case that $a = [4, 1], k = 1$. We should check that after subtracting the maximum element, the array has the property that $\max(a) - \min(a) \le k$, and immediately print $\texttt{Jerry}$ otherwise. In the other case, we can simply print $\texttt{Tom}$ when $\sum(a_i)$ is odd, and $\texttt{Jerry}$ when $\sum(a_i)$ is even. This is because when $\sum (a_i)$ is odd, Tom will be the last person to make a move since he went first and the total number of turns is odd, and vice versa. Time complexity is $O(n)$.
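For example (ours): with $a = [3, 2]$ and $k = 2$, subtracting from the maximum gives $[2, 2]$, so the first move is valid, and $\sum a_i = 5$ is odd, so Tom wins.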
|
[
"games",
"greedy",
"math"
] | 1,100
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n, k; cin >> n >> k;
vector <int> a(n);
for (auto &x : a) cin >> x;
long long sum = accumulate(a.begin(), a.end(), 0LL);
sort(a.begin(), a.end());
a[n - 1]--;
sort(a.begin(), a.end());
if (a[n - 1] - a[0] > k || sum % 2 == 0){
cout << "Jerry\n";
continue;
}
cout << "Tom\n";
}
return 0;
}
|
2107
|
C
|
Maximum Subarray Sum
|
You are given an array $a_1,a_2,\ldots,a_n$ of length $n$ and a positive integer $k$, but some parts of the array $a$ are missing. Your task is to fill the missing part so that the \textbf{maximum subarray sum}$^{\text{∗}}$ of $a$ is exactly $k$, or report that no solution exists.
Formally, you are given a binary string $s$ and a partially filled array $a$, where:
- If you remember the value of $a_i$, $s_i = 1$ to indicate that, and you are given the real value of $a_i$.
- If you don't remember the value of $a_i$, $s_i = 0$ to indicate that, and you are given $a_i = 0$.
All the values that you remember satisfy $|a_i| \le 10^6$. However, you may use values up to $10^{18}$ to fill in the values that you do not remember. It can be proven that if a solution exists, a solution also exists satisfying $|a_i| \le 10^{18}$.
\begin{footnotesize}
$^{\text{∗}}$The \textbf{maximum subarray sum} of an array $a$ of length $n$, i.e. $a_1, a_2, \ldots a_n$ is defined as $\max_{1 \le i \le j \le n} S(i, j)$ where $S(i, j) = a_i + a_{i + 1} + \ldots + a_j$.
\end{footnotesize}
|
We assume that there is at least one $s_i = 0$ (unfilled position); in the other case, when all $s_i = 1$, we can easily check whether the maximum subarray sum is $k$ or not. Let us first figure out when the answer is impossible. Replace $a_i$ with $-\texttt{INF}$ at every position with $s_i = 0$. If the maximum subarray sum is still $> k$, then the answer is clearly impossible. In every other case, the answer is in fact possible! All positions with $s_i = 0$ will be kept at $-\texttt{INF}$ except for one. Choose that position arbitrarily; let's call it $pos$. Let $b =$ the maximum prefix sum of the subarray $[a_{pos + 1}, a_{pos + 2}, \ldots, a_n]$ and $c =$ the maximum suffix sum of the subarray $[a_1, a_2, \ldots, a_{pos - 1}]$; here we allow the empty prefix and suffix too. Suppose the value of $a_{pos}$ is $x$. Then the maximum subarray sum over subarrays including $pos$ is $x + b + c$, and the maximum over subarrays not including $pos$ is $\le k$, because that is equivalent to replacing $a_{pos}$ with $-\texttt{INF}$. Thus, we can simply set $x = k - b - c$ and the conditions will be satisfied. Alternatively, let $f(x)$ be the maximum subarray sum when we replace $a_{pos}$ with $x$. Note the following properties of $f(x)$:
- $f(-\texttt{INF}) \le k$: this is exactly the assumption of the non-impossible case.
- $f(k) \ge k$: the subarray $[a_{pos}]$ by itself has a sum of $k$, so the maximum subarray sum is at least $k$.
- $f(x + 1) \ge f(x)$: increasing an element cannot reduce the maximum subarray sum.
- $f(x + 1) \le f(x) + 1$: increasing $a_{pos}$ by one increases each subarray sum by $0$ or $1$, depending on whether it includes $pos$, so the maximum cannot increase by more than $1$.
With these observations, we can in fact binary search for the first $x$ such that $f(x) = k$: by the third and fourth properties $f$ is "continuous" on the integers, hence it takes every value in the range $[f(a), f(b)]$ for $a \le b$, and we have shown that $k \in [f(-\texttt{INF}), f(k)]$ by the first and second properties. In both solutions, be careful to avoid overflow. One simple way is to take $\texttt{INF} = 10^{13}$, which is sufficient for our purposes (it prevents overflow but is still larger than $k + \sum |a_i|$). Time complexity: $O(n)$ or $O(n \cdot \log(A))$.
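As a small illustration of the $O(n)$ construction, here is a sketch on a tiny hand-made instance (the helper name, the array and k are our assumptions, not taken from the problem data); the full reference solution follows below.
#include <bits/stdc++.h>
using namespace std;
// Maximum subarray sum (Kadane), used for the feasibility check with unknowns at -INF
// and for verifying a finished construction.
long long maxSubarray(const vector<long long>& a) {
    long long best = LLONG_MIN, cur = 0;
    for (long long x : a) {
        cur = max(x, cur + x);
        best = max(best, cur);
    }
    return best;
}
int main() {
    const long long NEG = -1e13;        // "minus infinity" that cannot overflow
    vector<long long> a = {2, NEG, 3};  // hypothetical instance: s = "101", k = 6
    long long k = 6;
    int pos = 1;                        // the single unknown position we keep "active"
    long long b = 0, run = 0;           // best prefix sum to the right of pos (empty allowed)
    for (int i = pos + 1; i < (int)a.size(); i++) { run += a[i]; b = max(b, run); }
    long long c = 0; run = 0;           // best suffix sum to the left of pos (empty allowed)
    for (int i = pos - 1; i >= 0; i--) { run += a[i]; c = max(c, run); }
    a[pos] = k - b - c;                 // here: 6 - 3 - 2 = 1
    cout << maxSubarray(a) << "\n";     // prints 6, i.e. exactly k
}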
|
[
"binary search",
"constructive algorithms",
"dp",
"implementation",
"math"
] | 1,500
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n;
long long k;
cin >> n >> k;
string s; cin >> s;
vector <long long> a(n);
for (auto &x : a) cin >> x;
int pos = -1;
for (int i = 0; i < n; i++){
if (s[i] == '0'){
pos = i;
a[i] = -1e13;
}
}
long long mx = 0;
long long curr = 0;
for (int i = 0; i < n; i++){
curr = max(curr + a[i], a[i]);
mx = max(mx, curr);
}
if (mx > k || (mx != k && pos == -1)){
cout << "No\n";
continue;
}
if (pos != -1){
mx = 0, curr = 0;
long long L, R;
for (int i = pos + 1; i < n; i++){
curr += a[i];
mx = max(mx, curr);
}
L = mx;
mx = 0;
curr = 0;
for (int i = pos - 1; i >= 0; i--){
curr += a[i];
mx = max(mx, curr);
}
R = mx;
a[pos] = k - L - R;
}
cout << "Yes\n";
for (int i = 0; i < n; i++){
cout << a[i] << " \n"[i + 1 == n];
}
}
return 0;
}
|
2107
|
D
|
Apple Tree Traversing
|
There is an apple tree with $n$ nodes, initially with one apple at each node. You have a paper with you, initially with nothing written on it.
You are traversing on the apple tree, by doing the following action as long as there is at least one apple left:
- Choose an \textbf{apple path} $(u,v)$. A path $(u,v)$ is called an \textbf{apple path} if and only if for every node on the path $(u,v)$, there's an apple on it.
- Let $d$ be the number of apples on the path, write down three numbers $(d,u,v)$, in this order, on the paper.
- Then remove all the apples on the path $(u,v)$.
Here, the path $(u, v)$ refers to the sequence of vertices on the unique shortest walk from $u$ to $v$.
Let the number sequence on the paper be $a$. Your task is to find the lexicographically largest possible sequence $a$.
|
Note that the problem has a simple $O(n^2)$ solution: greedily find the largest diameter in the current forest (set of disjoint trees) at every step, tie-breaking by the lexicographic order of the endpoints, then remove the diameter and continue this process while at least $1$ node remains. Property $1$: any $2$ diameters always share at least one node. With some background theory about diameters this becomes trivial: when the diameter length (in terms of nodes) is odd, they share a common central node; when it is even, they share an edge (and hence $2$ nodes). Nevertheless, we provide an elementary proof. Let $a_1, a_2, \ldots a_d$ and $b_1, b_2, \ldots b_d$ be $2$ distinct diameters that do not share any node. Consider the closest pair of points on these $2$ paths, say $a_i$ and $b_j$. Then the path $(a_i, b_j)$ cannot contain any other $a_k$ or $b_k$, as that would contradict the fact that $a_i$ and $b_j$ are closest. Now, the length of the path $(a_1, b_1)$ is $(i - 1) + \operatorname{dist}(a_i, b_j) + (j - 1)$, where $\operatorname{dist}(x, y)$ denotes the number of nodes on the path $(x, y)$. Assume that $i, j \ge \frac{d + 1}{2}$; then $\operatorname{dist}(a_1, b_1)$ is strictly larger than $d$ (using $\operatorname{dist}(a_i, b_j) \ge 2$), contradicting that $d$ is the diameter. In the other cases of $i, j$ being smaller than $\frac{d + 1}{2}$, we can check the pairs $(a_1, b_d), (a_d, b_1)$ and $(a_d, b_d)$; it is easy to see that at least one of them will be longer than $d$. Property $2$: Let $(u, v)$ denote a diameter path, and let $T$ denote the tree. Then $\operatorname{diam}(T \setminus (u, v)) < \operatorname{diam}(T)$ (strictly smaller). This comes directly from the previous property: since all diameters share at least one node, the new forest obtained after removing the path $(u, v)$ has a strictly smaller diameter. Property $3$: In any sequence of positive numbers with $a_1 + a_2 + \ldots + a_k \le n$ and $a_i < a_j$ for all $1 \le i < j \le k$, we have $k = O(\sqrt{n})$. This is a classical property: $a_i \ge i$ for all $1 \le i \le k$ by induction, and thus $a_1 + a_2 + \ldots + a_k \ge 1 + 2 + \ldots + k = \dfrac{k \cdot (k + 1)}{2}$. But $a_1 + a_2 + \ldots + a_k \le n$ implies $\dfrac{k \cdot (k + 1)}{2} \le n$, and so $k \le 2 \cdot \sqrt{n}$. With these $3$ properties, we can now solve the problem. The algorithm is as follows:
- Step $1$: Maintain a collection of subtrees (of nodes which still have apples). Initially, the whole tree forms one component.
- Step $2$: For every subtree, find its lexicographically largest triplet $(d, u, v)$ using a BFS/DFS.
- Step $3$: Remove the nodes on the diameter path of each subtree. This splits the component into several smaller subtrees; add them to the collection.
- Step $4$: Repeat steps $2$ and $3$ while the collection is non-empty.
- Step $5$: Finally, sort (in descending order) the list of triplets $(d, u, v)$ collected throughout all the steps, and output them.
It is not hard to see that the above approach produces the correct answer, and it runs in $O(n \cdot \sqrt{n})$ time because steps $2$ and $3$ are repeated only $O(\sqrt{n})$ times. Every smaller tree is formed from a larger "parent" tree, and by Property $2$ the diameter of the smaller tree is strictly smaller than that of its parent. Suppose some tree is formed at the $k$-th step. Consider the sequence of its parent trees and their diameters: the diameters form a strictly increasing sequence, but their sum is bounded by $n$, since removing a diameter removes that many nodes and at most $n$ nodes are removed in total. Thus, using Property $3$, we get $k \le \sqrt{2 \cdot n}$. There are several correct ways to implement Step $2$. One of the neatest is as follows: find the furthest node from node $1$, tie-breaking lexicographically, say we get $x$; find the furthest node from $x$, again tie-breaking lexicographically, say we get $y$; then $(\max(x, y), \min(x, y))$ is the required diameter. To prove that these $2$ traversals are enough, one can use the fact mentioned in the proof of Property $1$, i.e. diameters share a central node or an edge (depending on parity). To find the furthest node, we can use either BFS or DFS. For Step $3$, we can keep walking to the parent of $y$ until we reach $x$, assuming we have computed the parents of all vertices with a DFS from $x$; you may refer to the code. Bonus $1$: solve the problem in $O(n \cdot \log(n))$. Let $f_i, g_i$ be the longest and second longest paths going down from $i$ into $i$'s subtree; the diameter is $\max \limits_{i=1}^n (f_i + g_i)$. When we find a diameter $(u,v)$, we update $f_w, g_w$ for all $w$ on the path $(\operatorname{lca}(u,v),\text{root})$. Note that the path $(\operatorname{lca}(u,v),\text{root})$ is always shorter than the diameter, so the total number of updates is $\mathcal O(n)$. Use a std::set to maintain $f_w, g_w$ for every node $w$ and a std::priority_queue to maintain the maximum $f_i + g_i$; the total complexity is $\mathcal O(n \log n)$. Bonus $2$: compute the exact worst case provably, and construct a tree that attains it. It can be proven that the diameter decreases by $2$ (not just $1$) in each iteration, and this is attainable, so the best bound is the largest $d$ with $1 + 3 + 5 + \ldots + (2 \cdot d - 1) \le n$, which comes out to $\sqrt{n}$. Let $L = \sqrt{n}$. Make paths $P_1, P_2, \ldots, P_L$ where $P_i$ is a line graph of size $2 \cdot i - 1$, and connect the central nodes of $P_i$ and $P_{i - 1}$ for all $i$. Refer to Wuhudsm's comment for a picture.
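A sketch of one such traversal for Step $2$ follows; the helper name, the toy graph, and the exact tie-breaking direction shown (prefer the larger index) are our illustrative assumptions, not the only valid choice.
#include <bits/stdc++.h>
using namespace std;
// From a start vertex, BFS over vertices that are still "alive" and return the
// farthest one (distance counted in edges), preferring the larger index on ties.
// This is one of the two traversals used to extract a diameter of a component.
pair<int,int> farthest(int start, const vector<vector<int>>& g, const vector<bool>& dead) {
    int n = g.size();
    vector<int> dist(n, -1);
    queue<int> q;
    dist[start] = 0;
    q.push(start);
    int best = start;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (dist[u] > dist[best] || (dist[u] == dist[best] && u > best)) best = u;
        for (int v : g[u]) if (!dead[v] && dist[v] == -1) {
            dist[v] = dist[u] + 1;
            q.push(v);
        }
    }
    return {best, dist[best]};
}
int main() {
    // A path 0-1-2 with all vertices alive: the farthest vertex from 0 is 2.
    vector<vector<int>> g = {{1}, {0, 2}, {1}};
    vector<bool> dead(3, false);
    auto [v, d] = farthest(0, g, dead);
    cout << v << " " << d << "\n";   // prints "2 2"
}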
|
[
"brute force",
"dfs and similar",
"greedy",
"implementation",
"trees"
] | 2,100
|
#include<bits/stdc++.h>
#ifdef DEBUG_LOCAL
#include <mydebug/debug.h>
#endif
using ll = long long;
const int N = 5e5+5;
using namespace std;
using pi = pair<int,int>;
using ti = tuple<int,int,int,int>;
int T,n,u,v,del[N],fa[N],ct; vector<int> g[N]; set<pi> t[N];
void dfs(int u,int f){
t[u].emplace(0,u),fa[u] = f;
for(int v : g[u]) if(v != f){
dfs(v,u); auto [x,y] = *--t[v].end();
t[u].emplace(x+1,y);
}
}ti gt(int u){
assert(t[u].size() >= 1);
if(t[u].size() == 1) return {0,u,u,u};
auto [x,y] = *--t[u].end();
auto [p,q] = *--(--t[u].end());
return {x + p,max(y,q),min(y,q),u};
}void los(){
cin >> n;
for(int i = 1;i <= n;i ++) g[i].clear(),del[i] = 0,t[i].clear();
for(int i = 1;i < n;i ++) cin >> u >> v,g[u].push_back(v),g[v].push_back(u);
ct = 0,dfs(1,0); priority_queue<ti> q;
for(int i = 1;i <= n;i ++) q.emplace(gt(i));
while(q.size()){
auto [di,u,v,d] = q.top(); q.pop();
if(del[d] || ti{di,u,v,d} != gt(d)) continue;
cout << di + 1 << " " << u << " " << v << " ";
while(u != d) del[u] = 1,u = fa[u];
while(v != d) del[v] = 1,v = fa[v];
del[d] = 1;
auto [x,y] = *--t[d].end();
while(fa[d] && !del[fa[d]]){
d = fa[d];
if(t[d].count({++x,y})) t[d].erase({x,y});
q.emplace(gt(d));
if(fa[d]){
auto [a,b] = *--t[d].end();
t[fa[d]].emplace(a+1,b);
}
}
}cout << "\n";
}int main(){
ios::sync_with_stdio(0),cin.tie(0);
for(cin >> T;T --;) los();
}
|
2107
|
E
|
Ain and Apple Tree
|
If I was also hit by an apple falling from an apple tree, could I become as good at physics as Newton?
To be better at physics, Ain wants to build an apple tree so that she can get hit by apples on it. Her apple tree has $n$ nodes and is rooted at $1$. She defines the weight of an apple tree as $\sum \limits_{i=1}^n \sum \limits_{j=i+1}^n \text{dep}(\operatorname{lca}(i,j))$.
Here, $\text{dep}(x)$ is defined as the number of edges on the unique shortest path from node $1$ to node $x$. $\operatorname{lca}(i, j)$ is defined as the unique node $x$ with the largest value of $\text{dep}(x)$ and which is present on both the paths $(1, i)$ and $(1, j)$.
From some old books Ain reads, she knows that Newton's apple tree's weight is around $k$, but the exact value of it is lost.
As Ain's friend, you want to build an apple tree with $n$ nodes for her, and the absolute difference between your tree's weight and $k$ should be \textbf{at most $1$}, i.e. $|\text{weight} - k| \le 1$. Unfortunately, this is not always possible, in this case please report it.
|
Let $C(n, k)$ denote $\frac{n!}{k! \cdot (n - k)!}$. We first start from the maximum possible value of $k$ and the tree that achieves it, i.e. a straight line path rooted at $1$. It is not too hard to see that this tree has the largest weight, and its weight is $C(n, 3)$. Thus, for $k > C(n, 3) + 1$, we answer $\texttt{No}$. We can prove that the line graph attains the maximum weight by showing that in an optimal tree each node has exactly $1$ child: if some node $u$ had $\ge 2$ children $c_1$ and $c_2$, then reattaching $c_1$ to $c_2$ and removing the edge between $c_1$ and $u$ would increase the weight. Now, the answer is in fact possible for all $0 \le k \le C(n, 3) + 1$. We try to modify the line graph to give a valid construction. First of all, replace $k$ with $C(n, 3) - k$ and view the problem as reducing the weight of the line graph instead. Suppose we broke the edge $(n - 1) - n$ and added the edge $x - n$; then the weight decreases by $C(n - x, 2)$. This is because the depths of $\operatorname{lca}(n, j)$ for $j = n - 1, n - 2, \ldots, x + 1$ decrease by $n - x - 1, n - x - 2, \ldots, 1$ respectively. Suppose that, after this step, we further broke the edge $(n - 2) - (n - 1)$ and added the edge $y - (n - 1)$. It is tempting to say that the weight again decreases by $C(n - 1 - y, 2)$, but this is only true when $y \ge x$. Generalizing, if we break the edges $(i - 1) - i$ and add the edges $a_i - i$ ($a_i < i$), then the weight decreases by $C(i - a_i, 2)$ for each such reattachment, as long as the sequence $a$ is decreasing (or $a_i = i - 1$, i.e. we do not change the parent). Because $a$ must be decreasing, the values $i - a_i$ must be pairwise distinct (or equal to $1$). This reduces the problem to the following: pick distinct values of the form $C(i, 2)$ ($0 \le i \le n - 1$) whose sum is within $1$ of $k$. In fact, the greedy algorithm solves this problem, i.e. take the largest untaken value $C(i, 2) \le k$ at each step. We prove this by induction. For $n \le 5$, it can be verified by brute force for all cases. For $n \ge 6$: if $k \le C(n - 1, 3)$, just set $n = n - 1$, and by induction it is solvable; if $C(n - 1, 3) < k \le C(n, 3)$, then set $k = k - C(n - 1, 2)$ and use induction. It can be verified that $0 \le k \le C(n - 1, 3)$ after this modification, so it is valid.
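A tiny sketch of the greedy step on the reduced problem (the function name and the sample values $k = 8$, $n = 7$ are ours, chosen only for illustration):
#include <bits/stdc++.h>
using namespace std;
// Greedy step described above: repeatedly subtract the largest available triangular
// number C(p, 2) = p*(p-1)/2 that does not exceed the remaining k, with p strictly
// decreasing between picks. Returns the chosen values of p.
vector<long long> pickTriangulars(long long k, long long n) {
    vector<long long> chosen;
    long long p = n - 1;
    while (k > 0 && p >= 2) {
        while (p >= 2 && p * (p - 1) / 2 > k) p--;
        if (p < 2) break;
        chosen.push_back(p);
        k -= p * (p - 1) / 2;
        p--;                      // the chosen p's must stay distinct (decreasing)
    }
    return chosen;                // leftover k is at most 1 by the argument above
}
int main() {
    for (long long p : pickTriangulars(8, 7)) cout << p << " ";   // prints "4 2": 6 + 1 = 7, leftover 1
    cout << "\n";
}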
|
[
"binary search",
"constructive algorithms",
"greedy",
"math",
"trees"
] | 2,600
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n; cin >> n;
long long k; cin >> k;
long long mx = 1LL * n * (n - 1) * (n - 2) / 6;
if (k > mx + 1){
cout << "No\n";
continue;
}
cout << "Yes\n";
k = mx - min(k, mx);
int p = n - 1;
for (int i = n; i >= 2; i--){
while (1LL * p * (p - 1) / 2 > k){
p--;
}
k -= 1LL * p * (p - 1) / 2;
cout << (i - p) << " " << i << "\n";
if (p != 1) p--;
}
}
return 0;
}
|
2107
|
F1
|
Cycling (Easy Version)
|
\textbf{This is the easy version of the problem. The difference between the versions is that in this version, $1\le n\le 5\cdot 10^3$ and you don't need to output the answer for each prefix. You can hack only if you solved all versions of this problem.}
Leo works as a programmer in the city center, and his lover teaches at a high school in the suburbs. Every weekend, Leo would ride his bike to the suburbs to spend a nice weekend with his lover.
There are $n$ cyclists riding in front of Leo on this road right now. They are numbered $1$, $2$, $\ldots$, $n$ from front to back. Initially, Leo is behind the $n$-th cyclist. The $i$-th cyclist has an agility value $a_i$.
Leo wants to get ahead of the $1$-st cyclist. Leo can take the following actions as many times as he wants:
- Assuming that the first person in front of Leo is cyclist $i$, he can go in front of cyclist $i$ for a cost of $a_i$. This puts him behind cyclist $i - 1$.
- Using his super powers, swap $a_i$ and $a_j$ ($1\le i < j\le n$) for a cost of $(j - i)$.
Leo wants to know the minimum cost to get in front of the $1$-st cyclist. Here you only need to print the answer for the whole array, i.e. $[a_1, a_2, \ldots, a_n]$.
|
Let us try to figure out the optimal strategy. We want to use small values for the "overtake" operation, and we will use "swap" operations to make the overtakes less costly. Let $p$ be the first position of the minimum element. We should overtake cyclists $1, 2, \ldots, p$ for a cost of $a_p$ each, moving $a_p$ toward the front with swaps as we go. This is because every $a_i$ ($1 \le i < p$) is $\ge a_p + 1$, so a swap plus an overtake with $a_p$ can't be worse; and obviously, if we were going to use some $a_q$ ($q > p$), it is cheaper in terms of swaps to use $a_p$, while the overtake cost cannot be worse. Now, we might use the value $a_p$ for overtakes further back as well, i.e. there exists some $q \ge p$ such that we overtake cyclists $1, 2, \ldots, q$ using $a_p$. This reduces the problem to the subproblem on $a[q + 1, n]$, and thus we can use dynamic programming! But why are the subproblem on $a[q + 1, n]$ and the rest of the array independent? If we took some $a_i$ ($i \le q$) to be used for overtakes in the suffix $a[q + 1, n]$, we could instead set up the swaps so that $a_p$ first reaches position $q$ and is then used for those suffix overtakes; this adds no extra swaps and can only reduce the cost. Let $dp_i$ be the answer for the suffix $a[i, n]$. We compute $p =$ the first position of the minimum in $a[i, n]$ and iterate over the partition index $j \ge p$: the suffix $a[j + 1, n]$ is solved first, and then $a_p$ is used to overtake cyclists $i, i + 1, \ldots, j$. The candidate value is $dp_{j + 1} + a_p \cdot (j - i + 1) + S$, where $S = 2 \cdot (j - p) + (p - i)$ is the total number of swaps (the reference code below writes the same recurrence $0$-indexed). This solves the problem in $O(n^2)$.
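A worked illustration of the recurrence on a tiny hypothetical array (the array and the printing are our own; the reference solution below is the general $O(n^2)$ version):
#include <bits/stdc++.h>
using namespace std;
// dp[i] = answer for the suffix a[i..n] (1-indexed), filled by the recurrence above.
int main() {
    int n = 3;
    vector<long long> a = {0, 3, 1, 2};            // a[1..3] = [3, 1, 2], a hypothetical example
    vector<long long> dp(n + 2, LLONG_MAX);
    dp[n + 1] = 0;
    for (int i = n; i >= 1; i--) {
        int p = i;
        for (int j = i + 1; j <= n; j++) if (a[j] < a[p]) p = j;   // first minimum of a[i..n]
        for (int j = p; j <= n; j++) {
            long long S = 2LL * (j - p) + (p - i);                 // total swap cost
            dp[i] = min(dp[i], dp[j + 1] + a[p] * (j - i + 1) + S);
        }
        cout << "dp[" << i << "] = " << dp[i] << "\n";
    }
    // Prints dp[3] = 2, dp[2] = 3, dp[1] = 5; the answer for the whole array is dp[1] = 5.
}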
|
[
"binary search",
"brute force",
"dp",
"greedy"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
int main(){
int t; cin >> t;
while (t--){
int n; cin >> n;
vector <int> a(n + 1);
for (int i = 1; i <= n; i++){
cin >> a[i];
}
vector <long long> dp(n + 1, 1e18);
dp[n] = 0;
for (int i = n - 1; i >= 0; i--){
int p = i + 1;
for (int j = i + 1; j <= n; j++) if (a[j] < a[p]){
p = j;
}
for (int j = p; j <= n; j++){
dp[i] = min(dp[i], dp[j] + 2 * (j - p) + 1LL * (j - i) * a[p] + (p - i - 1));
}
}
cout << dp[0] << "\n";
}
return 0;
}
|
2107
|
F2
|
Cycling (Hard Version)
|
\textbf{This is the hard version of the problem. The difference between the versions is that in this version, $1\le n\le 10^6$ and you need to output the answer for each prefix. You can hack only if you solved all versions of this problem.}
Leo works as a programmer in the city center, and his lover teaches at a high school in the suburbs. Every weekend, Leo would ride his bike to the suburbs to spend a nice weekend with his lover.
There are $n$ cyclists riding in front of Leo on this road right now. They are numbered $1$, $2$, $\ldots$, $n$ from front to back. Initially, Leo is behind the $n$-th cyclist. The $i$-th cyclist has an agility value $a_i$.
Leo wants to get ahead of the $1$-st cyclist. Leo can take the following actions as many times as he wants:
- Assuming that the first person in front of Leo is cyclist $i$, he can go in front of cyclist $i$ for a cost of $a_i$. This puts him behind cyclist $i - 1$.
- Using his super powers, swap $a_i$ and $a_j$ ($1\le i < j\le n$) for a cost of $(j - i)$.
Leo wants to know the minimum cost to get in front of the $1$-st cyclist.
In addition, he wants to know the answer for each $1\le i \le n$, $[a_1, a_2, \ldots, a_i]$ as the original array. The problems of different $i$ are independent. To be more specific, in the $i$-th problem, Leo starts behind the $i$-th cyclist instead of the $n$-th cyclist, and cyclists numbered $i + 1, i + 2, \ldots, n$ are not present.
|
Read the editorial for F1 first. There we considered the position $q$ that we take $p$ to, iterating $q = p, p + 1, \ldots, n$ and computing the cost for each. Property: the optimal $q$ is only $q = p$ or $q = n$. Suppose $q \ne p$, i.e. we use $a_p$ for overtaking $p + 1$, and we used some $a_i$ for overtaking $q + 1$. Then, if $a_p + 2 \le a_i$, it is not less optimal to overtake $q + 1$ using $a_p$ as well, and if $a_i \le a_p + 1$, it is not less optimal to overtake $q$ using $a_i$. Thus, $q$ is only ever at the "edges" of the range, i.e. $p$ or $n$. Now, let $p_1$ be the first minimum in $a[1, n]$, $p_2$ the first minimum in $a[p_1 + 1, n]$, $p_3$ the first minimum in $a[p_2 + 1, n]$, and so on. Let $q_i$ be the position that $p_i$ is taken to, for each $i$. Due to the above property, the optimal strategy is: for some $x$, $q_x = n$ and $q_i = p_i$ for $i < x$, i.e. we choose some index $p_x$ to take all the way to $n$, and the earlier indices do not move. We get an $O(n)$ solution for a single array by trying all values of $x$. But we need to answer prefix queries. Note that each option of $x$ gives a cost that is a linear function of the length of the array, and appending an element to the end of the array adds at most $1$ extra option for $x$. We can maintain all these linear functions in a CHT/Li Chao tree and query the minimum each time. Alternative: it can be proven that only values with $\min(a) \le a_i \le \min(a) + 20$ ever need to be considered, and we will never overtake using a larger value. This property gives simpler solutions in $O(n \cdot 20)$ with no data structures at all. Assume $\min(a) = 1$. Let $L_i$ denote the number of overtakes using the value $i$, and suppose we use values $1, 2, \ldots, k$. Then it is easy to see that indices $1, 2, \ldots, L_1$ will be overtaken with $1$, after that $L_1 + 1, L_1 + 2, \ldots, L_1 + L_2$ will be overtaken with $2$, and so on. It should not be more optimal to take the value $i$ to cover the entire suffix, and (by comparing the cost of the current arrangement with the modified one) this gives additional constraints of the form $2 \cdot (L_{i + 1} + L_{i + 2} + \ldots + L_k) > L_{i + 1} \cdot 1 + L_{i + 2} \cdot 2 + L_{i + 3} \cdot 3 + \ldots$. This simplifies to $L_{i + 1} \ge L_{i + 3} + 2 \cdot L_{i + 4} + \ldots$. Because $L_{i + 1} \ge 2 \cdot L_{i + 4}$, we get a bound of $3 \cdot \log_2(n)$ on the number of used values; the exact bound can be computed by a greedy approach.
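One possible structure for the "maintain linear functions, query their minimum" step is a Li Chao tree; below is a minimal generic sketch with toy lines (the struct and its limits are our own illustration; the reference solution below uses a set-based convex hull trick instead).
#include <bits/stdc++.h>
using namespace std;
// Li Chao tree over integer x in [0, n): add a line y = m*x + b and query the
// minimum y at a point, both in O(log n).
struct LiChao {
    struct Line { long long m, b; long long at(long long x) const { return m * x + b; } };
    int n;
    vector<Line> t;
    LiChao(int n) : n(n), t(4 * n, Line{0, (long long)4e18}) {}
    void add(Line nw, int v, int l, int r) {
        int mid = (l + r) / 2;
        bool lef = nw.at(l) < t[v].at(l), md = nw.at(mid) < t[v].at(mid);
        if (md) swap(t[v], nw);              // keep the line that is better at the midpoint
        if (r - l == 1) return;
        if (lef != md) add(nw, 2 * v, l, mid);
        else add(nw, 2 * v + 1, mid, r);
    }
    long long query(long long x, int v, int l, int r) {
        long long res = t[v].at(x);
        if (r - l == 1) return res;
        int mid = (l + r) / 2;
        if (x < mid) return min(res, query(x, 2 * v, l, mid));
        return min(res, query(x, 2 * v + 1, mid, r));
    }
    void add(long long m, long long b) { add(Line{m, b}, 1, 0, n); }
    long long query(long long x) { return query(x, 1, 0, n); }
};
int main() {
    LiChao lc(10);
    lc.add(2, 3);                                        // y = 2x + 3
    lc.add(-1, 10);                                      // y = -x + 10
    cout << lc.query(0) << " " << lc.query(5) << "\n";   // prints "3 5"
}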
|
[
"binary search",
"brute force",
"data structures",
"dp",
"greedy"
] | 2,800
|
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define INF (int)1e18
#define inf 1e18
#define ld long double
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
struct chtDynamicMin {
struct line {
int m, b; ld x;
int om, ob;
int val; bool isQuery;
line(int _m = 0, int _b = 0, int _om = 0, int _ob = 0) {
m = _m;
b = _b;
om = _om;
ob = _ob;
val = 0;
x = -inf;
isQuery = false;
}
int eval(int x) const { return m * x + b; }
bool parallel(const line &l) const { return m == l.m; }
ld intersect(const line &l) const {
return parallel(l) ? inf : 1.0 * (l.b - b) / (m - l.m);
}
bool operator < (const line &l) const {
if(l.isQuery) return x < l.val;
else return m < l.m;
}
};
set<line> hull;
typedef set<line> :: iterator iter;
bool cPrev(iter it) { return it != hull.begin(); }
bool cNext(iter it) { return it != hull.end() && next(it) != hull.end(); }
bool bad(const line &l1, const line &l2, const line &l3) {
return l1.intersect(l3) <= l1.intersect(l2);
}
bool bad(iter it) {
return cPrev(it) && cNext(it) && bad(*prev(it), *it, *next(it));
}
iter update(iter it) {
if(!cPrev(it)) return it;
ld x = it -> intersect(*prev(it));
line tmp(*it); tmp.x = x;
it = hull.erase(it);
return hull.insert(it, tmp);
}
void addLine(int m, int b) {
int om = m, ob = b;
m *= -1;
b *= -1;
line l(m, b, om, ob);
iter it = hull.lower_bound(l);
if(it != hull.end() && l.parallel(*it)) {
if(it -> b < b) it = hull.erase(it);
else return;
}
it = hull.insert(it, l);
if(bad(it)) return (void) hull.erase(it);
while(cPrev(it) && bad(prev(it))) hull.erase(prev(it));
while(cNext(it) && bad(next(it))) hull.erase(next(it));
it = update(it);
if(cPrev(it)) update(prev(it));
if(cNext(it)) update(next(it));
}
int query(int x) const {
if(hull.empty()) return inf;
line q; q.val = x, q.isQuery = 1;
iter it = --hull.lower_bound(q);
return - it -> eval(x);
}
};
void Solve()
{
int n; cin >> n;
// insert lines and calculate CHT
chtDynamicMin cht;
vector <int> a(n + 1);
for (int i = 1; i <= n; i++){
cin >> a[i];
}
vector <int> pv(n + 1, 0);
// previous smaller
stack <pair<int, int>> st;
for (int i = n; i >= 1; i--){
while (!st.empty() && st.top().first >= a[i]){
pv[st.top().second] = i;
st.pop();
}
st.push({a[i], i});
}
vector <int> dp(n + 1, 0);
for (int i = 1; i <= n; i++){
dp[i] = dp[pv[i]] + a[i] * (i - pv[i]) + (i - pv[i] - 1);
// slope is +2 + a[i]
// constant - slope * i
int m = 2 + a[i];
int c = dp[i] - m * i;
cht.addLine(m, c);
int ans = cht.query(i);
cout << ans << " \n"[i == n];
}
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
|
2108
|
A
|
Permutation Warm-Up
|
For a permutation $p$ of length $n$$^{\text{∗}}$, we define the function:
$$ f(p) = \sum_{i=1}^{n} \lvert p_i - i \rvert $$
You are given a number $n$. You need to compute how many \textbf{distinct} values the function $f(p)$ can take when considering \textbf{all possible} permutations of the numbers from $1$ to $n$.
\begin{footnotesize}
$^{\text{∗}}$A permutation of length $n$ is an array consisting of $n$ distinct integers from $1$ to $n$ in arbitrary order. For example, $[2,3,1,5,4]$ is a permutation, but $[1,2,2]$ is not a permutation ($2$ appears twice in the array), and $[1,3,4]$ is also not a permutation ($n=3$ but there is $4$ in the array).
\end{footnotesize}
|
Let's start with $p = [1, 2, 3, \dots , n]$. For this $p$, $f(p) = 0$. Let's move $n$ to the beginning one position at a time; it's easy to see that we increase the value by $2$ at each step. Now, let's move $n - 1$ to position $2$, then $n - 2$ to position $3$, and so on, until we reach $p = [n, n - 1, \dots , 2, 1]$. For this $p$, $f(p) = \lfloor \frac{n^2}{2} \rfloor$. While doing this, we obtained every even value from $0$ to $\lfloor \frac{n^2}{2} \rfloor$, since we were adding $+2$ at each step. Let's prove that we can't get any other values. First, it's easy to see that we can't obtain a value less than $0$ or greater than $\lfloor \frac{n^2}{2} \rfloor$. Second, we can only obtain even values, because each swap changes the value by an even number. So the answer is $\lfloor \frac{n^2}{4} \rfloor + 1$. Time complexity: $\text{O}(1)$.
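A quick brute-force sanity check of the formula for small $n$ (purely an illustrative sketch, not part of the intended $O(1)$ solution):
#include <bits/stdc++.h>
using namespace std;
// For each small n, enumerate all permutations, collect the distinct values of f(p),
// and compare the count with floor(n^2/4) + 1.
int main() {
    for (int n = 1; n <= 7; n++) {
        vector<int> p(n);
        iota(p.begin(), p.end(), 1);
        set<int> vals;
        do {
            int f = 0;
            for (int i = 0; i < n; i++) f += abs(p[i] - (i + 1));
            vals.insert(f);
        } while (next_permutation(p.begin(), p.end()));
        cout << n << ": " << vals.size() << " vs " << n * n / 4 + 1 << "\n";   // the two numbers agree
    }
}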
|
[
"combinatorics",
"greedy",
"math"
] | 800
|
#include <iostream>
#include <vector>
using namespace std;
typedef long long int ll;
#define SPEEDY std::ios_base::sync_with_stdio(0); std::cin.tie(0); std::cout.tie(0);
#define forn(i, n) for (ll i = 0; i < ll(n); ++i)
void solution(){
ll n;cin>>n;
ll k=n/2;
if (n%2){cout<<k*(k+1)+1;return;}
cout<<k*k+1;
}
int main() {
SPEEDY;
int t; cin>>t;
while (t--){
solution();
cout << '\n';
} return 0;
}
|
2108
|
B
|
SUMdamental Decomposition
|
On a recent birthday, your best friend Maurice gave you a pair of numbers $n$ and $x$, and asked you to construct an array of \textbf{positive} numbers $a$ of length $n$ such that $a_1 \oplus a_2 \oplus \cdots \oplus a_n = x$ $^{\text{∗}}$.
This task seemed too simple to you, and therefore you decided to give Maurice a return gift by constructing an array among all such arrays that has the smallest sum of its elements. You immediately thought of a suitable array; however, since writing it down turned out to be too time-consuming, Maurice will have to settle for just the sum of its elements.
\begin{footnotesize}
$^{\text{∗}}$$\oplus$ denotes the bitwise XOR operation.
\end{footnotesize}
|
Let $x > 1$, and let $c$ denote the number of 1-bits in its binary representation. Clearly, when $n \le c$, it is optimal to simply distribute distinct powers of two across the elements of the array, resulting in the minimum achievable sum of $x$. If, however, $n > c$, then it is obviously beneficial to add only ones to the extra $n - c$ elements. In the case where $n - c$ is odd, we will also need to add one more one to one of the $c$ blocks with powers of two so that the $\text{XOR}$ of all ones equals $x$ $\text{mod}$ $2$. If $x = 1$, then for odd $n$ we can clearly just fill all array elements with ones. Otherwise, we need to use the pair $[2, 3]$, whose $\text{XOR}$ is $1$, resulting in the minimum score of $n + 3$. In the remaining case of $x = 0$, the situation is nearly identical to the previous one, with the exception that no valid example exists for $n = 1$ (this is the only case with an answer of $-1$). So for even $n$, the answer is simply $n$, and otherwise it's $n + 3$ (since we use the triple $1 \oplus 2 \oplus 3 = 0$). The time complexity is $\text{O}(1)$ per test.
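A small dynamic-programming cross-check of the case analysis, restricted to tiny values (the bound of $31$ on each element and the sample pair $n = 4$, $x = 1$ are our assumptions for this sketch):
#include <bits/stdc++.h>
using namespace std;
// Minimum sum of n positive integers (each in [1, 31]) whose XOR equals x,
// computed by a straightforward DP over (count, current XOR).
int main() {
    int n = 4, x = 1;                                    // hypothetical small instance
    const int B = 32, INF = 1e9;
    vector<vector<int>> dp(n + 1, vector<int>(B, INF));
    dp[0][0] = 0;
    for (int c = 0; c < n; c++)
        for (int v = 0; v < B; v++) if (dp[c][v] < INF)
            for (int e = 1; e < B; e++)
                dp[c + 1][v ^ e] = min(dp[c + 1][v ^ e], dp[c][v] + e);
    cout << dp[n][x] << "\n";                            // prints 7 = n + 3, matching the x = 1, even n case
}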
|
[
"bitmasks",
"constructive algorithms",
"greedy",
"implementation",
"math"
] | 1,300
|
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;
void solution(){
int n,x;cin>>n>>x;
int bits=__builtin_popcountll(x);
if (n<=bits){cout<<x;return;}
if ((n-bits)%2==0)cout<<x+n-bits;
else{
if (x>1){cout<<x+n-bits+1;return;}
if (x==1){cout<<n+3;return;}
else{
if (n==1){cout<<-1;return;}
else cout<<n+3;
}
}
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int t=1;
cin>>t;
while (t--){
solution();
cout << '\n';
} return 0;
}
|
2108
|
C
|
Neo's Escape
|
Neo wants to escape from the Matrix. In front of him are $n$ buttons arranged in a row. Each button has a weight given by an integer: $a_1, a_2, \ldots, a_n$.
Neo is immobilized, but he can create and move clones. This means he can perform an unlimited number of actions of the following two types in any order:
- Create a clone in front of a specific button.
- Move an existing clone one position to the left or right.
As soon as a clone is in front of another button that has not yet been pressed—regardless of whether he was created or moved — he \textbf{immediately} presses it. If the button has already been pressed, a clone does nothing — buttons can only be pressed once.
For Neo to escape, he needs to press \textbf{all} the buttons in such an order that the sequence of their weights is \textbf{non-increasing} — that is, if $b_1, b_2, \ldots, b_n$ are the weights of the buttons in the order they are pressed, then it must hold that $b_1 \geq b_2 \geq \cdots \geq b_n$.
Your task is to determine the minimum number of clones that Neo needs to create in order to press all the buttons in a valid order.
|
Note that consecutive buttons with the same weight do not affect the result, so we will keep only one of them in such sequences. In the resulting array, we find peaks (local maxima - elements that are strictly greater than both of their neighbors). The number of such peaks is the answer, because:
- Each peak is separated from the others by smaller elements. Therefore, the only way to reach a peak is by creating a clone there.
- If a button was visited, we can return to it.
- Any element other than a peak can be reached from a larger neighbor, since we have already visited it. Thus, creating clones on all other elements is not necessary.
For example, $[1, 3, 2, 5, 5, 2]$ compresses to $[1, 3, 2, 5, 2]$, which has two peaks ($3$ and $5$), so two clones are needed. Complexity: $\text{O}(n)$
|
[
"binary search",
"brute force",
"data structures",
"dp",
"dsu",
"graphs",
"greedy",
"implementation"
] | 1,500
|
#include <iostream>
#include <vector>
using namespace std;
int main() {
int tt = 1;
cin >> tt;
while(tt--){
int n; cin >> n;
vector<int> a;
a.push_back(-1e9);
for (int i = 0; i < n; i++){
int x; cin >> x;
if (a.back() == x);
else a.push_back(x);
}
a.push_back(-1e9);
int ans = 0;
for (int i = 1; i < a.size() - 1; i++)
if (a[i - 1] < a[i] && a[i] > a[i + 1]) ans++;
cout << ans << endl;
}
}
|
2108
|
D
|
Needle in a Numstack
|
This is an interactive problem.
You found the numbers $k$ and $n$ in the attic, but lost two arrays $A$ and $B$.
You remember that:
- $|A| + |B| = n$, the total length of the arrays is $n$.
- $|A| \geq k$ and $|B| \geq k$, the length of each array is at least $k$.
- The arrays consist only of numbers from $1$ to $k$.
- If you take any $k$ consecutive elements from array $A$, they will all be different. Also, if you take any $k$ consecutive elements from array $B$, they will all be different.
Fortunately, a kind spirit that settled in the attic found these arrays and concatenated them into an array $C$ of length $n$. That is, the elements of array $A$ were first written into array $C$, followed by the elements of array $B$.
You can ask the kind spirit up to $250$ questions. Each question contains an index $i$ ($1 \leq i \leq n$). In response, you will receive the $i$-th element of the concatenated array $C$.
You need to find the lengths of arrays $A$ and $B$, or report that it is impossible to determine them uniquely.
|
Let $k = 4$ and suppose the hidden array is [ 2 4 3 1 2 4 3 1 2 1 3 2 4 1 3 2 4 1 ]. For clarity, split it into groups of $k$ elements: [ 2 4 3 1 ] [ 2 4 3 1 ] [ 2 1 3 2 ] [ 4 1 3 2 ] [ 4 1 ]. By the statement, both hidden arrays have length at least $k$, so we query the leftmost $k$ and the rightmost $k$ elements; the first window lies entirely inside $A$, the last one entirely inside $B$. Conceptually extend the array by two elements on the right so that the last window also aligns with a group boundary; this gives the two permutations in a "normalized" form: $[2, 4, 3, 1]$ for the left part and $[4, 1, 3, 2]$ for the right part. Take any position where these two permutations differ - say the second position - and look at the element in that position of every group. Near the left end these elements follow the left permutation, near the right end they follow the right permutation, and the switch happens exactly once; a binary search over the groups finds this switch using $\log\frac{n}{k}$ queries. Now consider the single group between the two boundary groups. Positions where the two permutations coincide (in our example, only the third position) carry no information, so we discard them; a second binary search over the remaining positions of this group locates the exact boundary using $\log k$ more queries. If discarded ("no-man's land") positions remain between the last element matching the left permutation and the first element matching the right permutation, the split point is not unique and we output $-1$. In our example there are no such positions, so the problem is solved. Total complexity: $2k + \log\frac{n}{k} + \log k$ queries.
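A sketch of the first binary search from this walkthrough, run against a local copy of the example array (in the real problem every access to C below would be one question to the jury; the variable names are ours):
#include <bits/stdc++.h>
using namespace std;
// Binary search over the groups: find where the element at a fixed offset inside
// each group switches from the left permutation's value to the right one.
int main() {
    vector<int> C = {2,4,3,1, 2,4,3,1, 2,1,3,2, 4,1,3,2, 4,1};   // the example, k = 4
    int k = 4, n = (int)C.size();
    int off = 1;                          // a 0-based position where the two end permutations differ
    int leftVal = C[off];                 // value of the left permutation at this offset (here 4)
    int lo = 0, hi = (n - 1 - off) / k;   // last group index whose element at `off` exists
    // invariant: group lo matches the left permutation, group hi does not
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (C[mid * k + off] == leftVal) lo = mid;   // one "query"
        else hi = mid;
    }
    cout << "boundary between groups " << lo << " and " << hi << "\n";   // prints 1 and 2
}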
|
[
"binary search",
"brute force",
"implementation",
"interactive"
] | 2,200
|
#include <cstdio>
#include <cstdlib>
#define MIN(X, Y) (((X) < (Y)) ? (X) : (Y))
#define MAX(X, Y) (((X) > (Y)) ? (X) : (Y))
typedef long long l;
l a[55], b[55], ui[55];
l ask(l v) {
printf("? %lld\n", v + 1);
fflush(stdout);
l t; scanf("%lld", &t);
return t;
}
void noans() {
printf("! -1\n");
fflush(stdout);
}
void ans(l a, l b) {
printf("! %lld %lld\n", a, b);
fflush(stdout);
}
void solve() {
l n, k; scanf("%lld %lld", &n, &k);
for (l i = 0; i < k; ++i) a[i] = ask(i);
for (l i = n - k; i < n; ++i) b[i % k] = ask(i);
l uc = 0;
for (l i = 0; i < k; ++i) if (a[i] != b[i]) ui[uc++] = i;
if (!uc) {
if (n == k * 2) ans(k, k);
else noans();
return;
}
l le = ui[0], ri = ui[0] + (n - 1) / k * k;
while (le + k != ri) {
l mid = le + (ri - le) / k / 2 * k;
if (ask(mid) == a[ui[0]]) le = mid;
else ri = mid;
}
l lee = 0, rii = uc;
while (lee + 1 != rii) {
l mid = (lee + rii) / 2;
if (ask(le - ui[0] + ui[mid]) == a[ui[mid]]) lee = mid;
else rii = mid;
}
l pos1 = MAX(le - ui[0] + ui[lee], k - 1);
l pos2 = MIN(le - ui[0] + ((rii == uc) ? (ui[0] + k) : ui[rii]), n - k);
if (pos1 + 1 != pos2) { noans(); return; }
ans(pos2, n - pos2);
}
int main() {
l t; scanf("%lld", &t);
while (t--) solve();
}
|
2108
|
E
|
Spruce Dispute
|
It's already a hot April outside, and Polycarp decided that this is the perfect time to finally take down the spruce tree he set up several years ago. As he spent several hours walking around it, gathering his strength, he noticed something curious: the spruce is actually a tree$^{\text{∗}}$ — and not just any tree, but one consisting of an \textbf{odd} number of vertices $n$. Moreover, on $n-1$ of the vertices hang Christmas ornaments, painted in exactly $\frac{n-1}{2}$ distinct colors, with exactly two ornaments painted in each color. The remaining vertex, as tradition dictates, holds the tree's topper.
At last, after several days of mental preparation, Polycarp began dismantling the spruce. First, he removed the topper and had already started taking apart some branches when suddenly a natural question struck him: how can he remove one of the tree's edges and rearrange the ornaments in such a way that the sum of the lengths of the simple paths between ornaments of the same color is as large as possible?
In this problem, removing an edge from the tree is defined as follows: choose a pair of adjacent vertices $a$ and $b$ ($a < b$), then remove vertex $b$ from the tree and reattach all of $b$'s adjacent vertices (except for $a$) directly to $a$.
Polycarp cannot continue dismantling his spruce until he gets an answer to this question. However, checking all possible options would take him several more years. Knowing your experience in competitive programming, he turned to you for help. But can you solve this dispute?
\begin{footnotesize}
$^{\text{∗}}$A tree is a connected graph without cycles.
\end{footnotesize}
|
Let's assume we have already removed one of the edges from the original tree. Then, we are left with a tree containing an even number of vertices, namely $n - 1$. Note that the maximum possible sum of distances between same-colored balls in this tree will be achieved if, for every edge, all vertices in its smaller subtree are of different colors. In that case, the edge will contribute the maximum possible number of times to the answer - specifically, $\min(a, b)$, where $a$ and $b$ are the sizes of the subtrees connected by this edge (since, obviously, no more paths can pass through this edge). Great - now let's root our tree at its centroid, which is a vertex that splits the tree into subtrees whose sizes do not exceed half of the total size. It can be proven that such a vertex always exists in any tree (and if there are two such vertices, we can pick either). Now, let's divide all vertices into groups based on which child subtree of the centroid they belong to, adding the centroid itself to the smallest group. Notice that the sizes of all groups still do not exceed $\frac{n - 1}{2}$. Therefore, we can pair up the vertices in such a way that each pair contains vertices from different groups. This can be done greedily using a heap in $O(n \log n)$, or by running DFS from the centroid and coloring vertices modulo $\frac{n - 1}{2}$ (clearly, with this method, no group will have two vertices of the same color), which has complexity $O(n)$. Thus, we are left to determine which edge we should remove so that the total sum of the minimal subtree sizes across all edges in the original tree decreases as little as possible. Note that, since initially the tree has an odd number of vertices, its centroid is uniquely defined and will not change no matter which edge we remove. At the same time, since all paths pass through the centroid, the total sum of their lengths can be rewritten as the sum of distances to the centroid. Now, let's make two observations: It is more beneficial to remove a leaf than any other type of vertex. Among all leaves, it is best to remove the one closest to the centroid. The second observation follows directly from the first and the earlier remark, so we only need to justify the first one. But this is fairly obvious: when we remove an edge, besides subtracting the depth of its nearest vertex, we also decrease the depths of all vertices in its subtree by one. Therefore, it will always be better to remove deeper edges rather than shallower ones. As a result, after finding the centroid in $O(n)$, we get an overall complexity of $O(n \log n)$ (if we use a heap) or $O(n)$.
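A standalone sketch of the centroid-finding step mentioned above (the toy star graph and the helper names are ours; the reference solution below integrates this step differently):
#include <bits/stdc++.h>
using namespace std;
// Find a centroid: a vertex whose removal leaves no component of more than n/2 vertices.
int n, centroid = -1;
vector<vector<int>> g;
int dfsSize(int u, int parent) {
    int total = 1, biggest = 0;
    for (int v : g[u]) if (v != parent) {
        int s = dfsSize(v, u);
        biggest = max(biggest, s);
        total += s;
    }
    if (max(biggest, n - total) <= n / 2) centroid = u;   // every side is small enough
    return total;
}
int main() {
    // A star with center 0 and leaves 1..4: the centroid is 0.
    n = 5;
    g.assign(n, {});
    for (int v = 1; v < n; v++) { g[0].push_back(v); g[v].push_back(0); }
    dfsSize(0, -1);
    cout << centroid << "\n";   // prints 0
}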
|
[
"constructive algorithms",
"dfs and similar",
"graphs",
"greedy",
"implementation",
"shortest paths",
"trees"
] | 2,600
|
#include <cstdio>
#include <vector>
#define S 200005
using namespace std;
typedef long long l;
l coloring[S], centroid, best, best_dist, n, color;
vector<vector<l>> g;
l search_centroid(l u, l from) {
l sum = 0;
bool f = true;
for (l v : g[u]) if (v != from) {
l t = search_centroid(v, u);
if (t > n / 2) f = false;
sum += t;
}
if (f && n - 1 - sum <= n / 2) centroid = u;
return sum + 1;
}
void make_coloring(l u, l from, l dist) {
coloring[u] = (color++) % (n / 2) + 1;
if (g[u].size() == 1 && dist < best_dist) {
best_dist = dist;
best = u;
}
for (l v : g[u]) if (v != from)
make_coloring(v, u, dist + 1);
}
void solve() {
centroid = -1, best_dist = S, color = 0;
scanf("%lld", &n);
g.assign(n, vector<l>());
for (l i = 0; i < n - 1; ++i) {
l u, v; scanf("%lld %lld", &u, &v); --u, --v;
g[u].push_back(v); g[v].push_back(u);
}
search_centroid(1543 % n, -1);
make_coloring(centroid, -1, 0);
l bbest = max(best, g[best][0]);
coloring[centroid] = coloring[bbest];
coloring[bbest] = 0;
printf("%lld %lld\n", best + 1, g[best][0] + 1);
for (l i = 0; i < n; ++i) {
if (i) printf(" ");
printf("%lld", coloring[i]);
}
printf("\n");
}
int main() {
l tc; scanf("%lld", &tc);
while (tc--) solve();
}
|
2108
|
F
|
Fallen Towers
|
Pizano built an array $a$ of $n$ towers, each consisting of $a_i \ge 0$ blocks.
Pizano can knock down a tower so that the next $a_i$ towers grow by $1$. In other words, he can take the element $a_i$, increase the next $a_i$ elements by one, and then set $a_i$ to $0$. The blocks that fall outside the array of towers disappear. If Pizano knocks down a tower with $0$ blocks, nothing happens.
Pizano wants to knock down all $n$ towers in any order, \textbf{each exactly once}. That is, for each $i$ from $1$ to $n$, he will knock down the tower at position $i$ exactly once.
Moreover, the resulting array of tower heights \textbf{must be non-decreasing}. This means that after he knocks down all $n$ towers, for any $i < j$, the tower at position $i$ must not be taller than the tower at position $j$.
You are required to output the maximum $\text{MEX}$ of the resulting array of tower heights.
The $\text{MEX}$ of an array is the smallest non-negative integer that is not present in the array.
|
It can be shown that for any array $A$ obtained after collapsing all $n$ towers, we can also obtain any other array $B$ whose elements do not exceed the elements of $A$. A strict formal proof is provided further below; here is a brief explanation. Suppose $k$ blocks fell onto the $i$-th tower over the entire process. Then, for its height in the final array to be $A_i$, $A_i$ blocks must have fallen after its collapse, while the remaining $k - A_i$ blocks fell before. Thus, we can always rearrange the order in which the towers collapse so that its height becomes $B_i \leq A_i$. In other words, we ensure that $B_i$ blocks fall onto the $i$-th tower after its collapse, and $k - B_i$ fall before. From this claim about the possibility of obtaining a smaller array, two facts follow:
- For any answer $\text{MEX} = x$, we can also obtain the answer $x - 1$. This means we can apply binary search on the answer.
- For any answer $\text{MEX} = x$, we can achieve it in the form $[0, 0, 0, 0, \ldots, 1, 2, 3, \ldots, x - 1]$. Thus, at each iteration of the binary search, we need to check the possibility of obtaining such an array. To do this, we traverse the towers from left to right, tracking the number of blocks that fall onto each tower at some point and collapsing each tower after the required number of blocks have fallen onto it. The most efficient way to track the number of blocks is using the scanline method.
In total, there are $\text{O}(\log n)$ iterations in the binary search over the answer and $\text{O}(n)$ operations per iteration. The final complexity is $\text{O}(n \log n)$.
Formal proof. Let the array $a$ of length $n$ be some input data for the problem. Let $r$ and $r'$ be arrays of length $n$ such that $\forall i : 0 \leq r'_i \leq r_i$. Assertion 1: Suppose there exists a permutation $p$ such that the tower with index $i$ collapses $p_i$-th in order, resulting in the array $r$. Then, there exists a permutation $p'$ such that the tower with index $i$ collapses $p'_i$-th in order, resulting in the array $r'$. $\square$ Let $s_i$ be the total number of towers that ever fell onto position $i$ when collapsing the towers in the order $p$. Note that out of these, $r_i$ towers were collapsed later than the $i$-th, and the rest were collapsed earlier. The height of the $i$-th tower at the moment of its collapse is $a_i + (s_i - r_i)$. Inductive hypothesis: There exists a permutation $p^{(k)}$ of numbers from $1$ to $k \leq n$ such that if the $i$-th tower is collapsed $p^{(k)}_i$-th in order, then:
- The resulting height at the $i$-th position will be $r'_i$.
- The number (denoted as $s^{(k)}_i$) of towers that fell onto position $i$ when collapsing the towers in the order $p^{(k)}$ is greater than or equal to $s_i$.
Base case: The permutation $p^{(1)} = [1]$. After collapsing, the tower's height becomes $0$, so $r'_1 = r_1 = 0$ in any resulting array, and $s^{(1)}_1 = s_1 = 0$. Inductive step: Suppose the permutation $p^{(k - 1)}$ exists. We prove the existence of $p^{(k)}$. For all $i \leq k - 1$, the height of the $i$-th tower at the moment of collapse under the order $p^{(k - 1)}$ is no less than under the order $p$: $(s^{(k - 1)}_i \geq s_i \land r'_i \leq r_i) \Longrightarrow a_i + (s^{(k - 1)}_i - r'_i) \geq a_i + (s_i - r_i)$. From this, it follows that when collapsing in the order $p^{(k - 1)}$, at least $s_k$ towers will fall onto position $k$ (let their count be $x \geq s_k \geq r'_k$). This is because the heights at the moment of collapse for all towers $i < k$ have either remained the same or increased. If $x > r'_k$, let tower $j$ be the $(x - r'_k)$-th tower among those that fell onto position $k$. Then, collapse tower $k$ immediately after tower $j$. If $x = r'_k$, collapse tower $k$ first. In both cases, after collapsing $k$, exactly $r'_k$ towers will fall onto it, and its height will become $r'_k$. Formally, for $x > r'_k$, the permutation $p^{(k)}$ is constructed as follows:
- $\forall i < k : p^{(k - 1)}_i \leq p^{(k - 1)}_j \Rightarrow p^{(k)}_i = p^{(k - 1)}_i$
- $p^{(k)}_k = p^{(k - 1)}_j + 1$
- $\forall i < k : p^{(k - 1)}_i > p^{(k - 1)}_j \Rightarrow p^{(k)}_i = p^{(k - 1)}_i + 1$
For $x = r'_k$:
- $p^{(k)}_k = 1$
- $\forall i < k : p^{(k)}_i = p^{(k - 1)}_i + 1$
The array $s^{(k)}$ is constructed as $\forall i < k : s^{(k)}_i = s^{(k - 1)}_i$ and $s^{(k)}_k = x$. The induction is proven. Thus, we set $p' = p^{(n)}$, and the assertion is proven. $\blacksquare$ Corollary 1: If we can obtain $\text{MEX}(r) > 0$ as the answer to the problem for the resulting array $r$, then we can also obtain $\text{MEX}(r) - 1$ as the answer to the problem. $\square$ Set $r'_i = \max(0, r_i - 1)$. Then, by Assertion 1, $r'$ can be obtained as the resulting array, and $\text{MEX}(r') = \text{MEX}(r) - 1$. $\blacksquare$ Corollary 2: If we can obtain $\text{MEX}(r)$ as the answer to the problem for the resulting array $r$, then we can obtain $\text{MEX}(r') = \text{MEX}(r)$ for the resulting array $r'$, where $r'_i = \max(0, (\text{MEX}(r) - 1) - (n - i))$. $\square$ By Assertion 1, $r'$ can be obtained as the resulting array. $\blacksquare$ We can perform binary search on the answer (from Corollary 1), and at each iteration, when checking the answer $x$, verify whether we can obtain the resulting array $r$ of the form $r_i = \max(0, (x - 1) - (n - i))$ (from Corollary 2).
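A tiny brute force supporting the monotonicity claim on a hypothetical $3$-tower instance; only valid (non-decreasing) final arrays are considered, and the instance is our own choice for illustration.
#include <bits/stdc++.h>
using namespace std;
// Enumerate every order of knocking down the towers of a small array, keep only the
// valid (non-decreasing) results, and collect the achievable MEX values.
int main() {
    vector<int> a0 = {2, 0, 1};                    // hypothetical small instance
    int n = (int)a0.size();
    vector<int> order(n);
    iota(order.begin(), order.end(), 0);
    set<int> mexes;
    do {
        vector<int> a = a0;
        for (int i : order) {                      // knock down tower i
            int h = a[i];                          // its current height
            a[i] = 0;
            for (int j = i + 1; j < n && j <= i + h; j++) a[j]++;
        }
        if (!is_sorted(a.begin(), a.end())) continue;
        int mex = 0;
        while (count(a.begin(), a.end(), mex)) mex++;
        mexes.insert(mex);
    } while (next_permutation(order.begin(), order.end()));
    for (int m : mexes) cout << m << " ";          // prints "1 2": every value below the optimum is achievable
    cout << "\n";
}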
|
[
"binary search",
"greedy"
] | 2,900
|
#include <cstdio>
#include <algorithm>
#include <cstring>
#define S 100005
typedef long long l;
l a[S], d[S], n;
bool check(l ans) {
memset(d, 0, sizeof(l) * n);
l acc = 0;
for (l i = 0; i < n; ++i) {
acc -= d[i];
l need = std::max(0LL, i - (n - ans));
if (acc < need) return false;
l end = i + a[i] + (acc++) - need + 1;
if (end < n) ++d[end];
}
return true;
}
void solve() {
scanf("%lld", &n);
for (l i = 0; i < n; ++i) scanf("%lld", &a[i]);
l le = 1, ri = n + 1, mid;
while (ri - le > 1) {
mid = (le + ri) / 2;
if (check(mid)) le = mid;
else ri = mid;
}
printf("%lld\n", le);
}
int main() {
l tc; scanf("%lld", &tc);
while (tc--) solve();
}
|
2109
|
A
|
It's Time To Duel
|
Something you may not know about Mouf is that he is a big fan of the Yu-Gi-Oh! card game. He loves to duel with anyone he meets. To gather all fans who love to play as well, he decided to organize a big Yu-Gi-Oh! tournament and invited $n$ players.
Mouf arranged the $n$ players in a line, numbered from $1$ to $n$. They then held $n - 1$ consecutive duels: for each $i$ from $1$ to $n - 1$, player $i$ faced player $i + 1$, producing one winner and one loser per match. Afterward, each player reports a value $a_i(0 \le a_i \le 1)$:
- $0$ indicating they won no duels;
- $1$ indicating they won at least one duel.
Since some may lie about their results (e.g., reporting a $1$ instead of a $0$, or vice versa) to influence prize outcomes, Mouf will cancel the tournament if he can prove any report to be false.
Given the array $a$, determine whether at least one player must be lying.
|
What can you say when every player has reported a win? A report array can be genuine only if it meets both of these conditions:
- There must be at least one player who reported $0$ - there are $n - 1$ duels and $n$ players, so at most $n - 1$ players can have won a duel.
- There cannot be two adjacent players who both reported $0$ - one of them must have won the duel between them.
If either condition fails, the answer is YES (there is definitely a liar); otherwise, the answer is NO.
|
[
"implementation"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; i++) {
cin >> a[i];
}
if (accumulate(a.begin(), a.end(), 0) == n) {
cout << "YES" << "\n";
return;
}
for (int i = 0; i < n - 1; i++) if (!a[i] && !a[i + 1]) {
cout << "YES" << "\n";
return;
}
cout << "NO" << "\n";
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2109
|
B
|
Slice to Survive
|
Duelists Mouf and Fouad enter the arena, which is an $n \times m$ grid!
Fouad's monster starts at cell $(a, b)$, where rows are numbered $1$ to $n$ and columns $1$ to $m$.
Mouf and Fouad will keep duelling until the grid consists of only one cell.
In each turn:
- Mouf first cuts the grid along a row or column line into two parts, discarding the part without Fouad's monster. Note that the grid must have at least two cells; otherwise, the game has already ended.
- After that, in the same turn, Fouad moves his monster to any cell (possibly the same one it was in) within the remaining grid.
\begin{center}
{\small Visualization of the phases of the fourth test case.}
\end{center}
Mouf wants to minimize the number of turns, while Fouad wants to maximize them. How many turns will this epic duel last if both play optimally?
|
What changes if the turn order is reversed - starting with Fouad's move before Mouf's cut? Can the row and column dimensions be treated independently, or do they interact? How might that influence your approach? We restructure the game by grouping each move (Fouad's move) with the following cut (Mouf's cut). After Mouf performs his initial cut, each combined turn consists of Fouad first moving to any remaining cell, followed by Mouf cutting the grid. Now, let's temporarily set aside the initial cut and focus on the state of the grid just before Fouad's first move in this new structure. Suppose the remaining Grid has dimensions $n' \times m'$. Since each cut affects only one dimension, we can handle the row and column reductions independently. Thus, the number of required turns is: $f(n') + f(m')$ where $f(l)$ denotes the number of turns required to reduce a $1 \times l$ grid to a single cell. The function $f(l)$ can be defined recursively as: $f(l) = \begin{cases} 0 & l = 1, \\ 1 + \max\limits_{1 \le i \le l} \min(f(i), f(l - i + 1)) & l > 1. \end{cases}$ Given that $f$ is non-decreasing, the minimum between $f(i)$ and $f(l - i + 1)$ is maximized when $i = \left\lfloor \frac{l}{2} \right\rfloor$. Thus, we simplify the recurrence as: $f(l) = 1 + f\left(\left\lceil \frac{l}{2} \right\rceil \right)$ Now returning to the initial cut, note that it is in Mouf's best interest to minimize $n'$ and $m'$ since $f$ is non-decreasing. To do so, he should ensure Fouad ends up on the boundary of the remaining grid after the first cut. This yields four possible configurations: $(n', m') \in S = \{ (a, m),\ (n - a + 1, m),\ (n, b),\ (n, m - b + 1) \}$ The total number of turns, including the initial cut, is: $\text{ans} = 1 + \min\limits_{(n', m') \in S} \left[ f(n') + f(m') \right]$ The overall time complexity is: $O(\log(n) + \log(m))$.
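A small sketch of the halving recurrence written iteratively (the function name and the sample $5 \times 7$ grid are our illustrative assumptions):
#include <bits/stdc++.h>
using namespace std;
// f(l) = 1 + f(ceil(l/2)), f(1) = 0: the number of combined turns needed to shrink
// one dimension of length l down to 1, since in the worst case ceil(l/2) of that
// dimension remains after each combined turn.
int f(int l) {
    int turns = 0;
    while (l > 1) {
        l = (l + 1) / 2;
        ++turns;
    }
    return turns;
}
int main() {
    // After the initial cut, a remaining 5 x 7 grid needs f(5) + f(7) = 3 + 3 further turns.
    cout << f(5) << " " << f(7) << "\n";   // prints "3 3"
}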
|
[
"bitmasks",
"greedy",
"math"
] | 1,200
|
#include<bits/stdc++.h>
using namespace std;
void solve() {
int n, m, a, b;
cin >> n >> m >> a >> b;
vector<pair<int, int>> rec({
make_pair(a, m), make_pair(n - a + 1, m),
make_pair(n, b), make_pair(n, m - b + 1)});
int ans = n + m;
for (auto [n1, m1] : rec) {
int res = 0;
while (n1 > 1) {
++res;
n1 = (n1 + 1) / 2;
}
while (m1 > 1) {
++res;
m1 = (m1 + 1) / 2;
}
ans = min(ans, res);
}
cout << 1 + ans << "\n";
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2109
|
C1
|
Hacking Numbers (Easy Version)
|
\textbf{This is the easy version of the problem. In this version, you can send at most $\mathbf{7}$ commands. You can make hacks only if all versions of the problem are solved.}
This is an interactive problem.
Welcome, Duelists! In this interactive challenge, there is an unknown integer $x$ ($1 \le x \le 10^9$). You must make it equal to a given integer in the input $n$. By harnessing the power of "Mathmech" monsters, you can send a command to do one of the following:
\begin{center}
\begin{tabular}{|c||c||c||l||l||c|}
\hline
\textbf{Command} & \textbf{Constraint} & \textbf{Result} & \textbf{Case} & \textbf{Update} & \textbf{Jury's response} \
\hline
\hline
\multirow{2}{*}{"add $y$"} & \multirow{2}{*}{$-10^{18} \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x + y$} & $\text{if } 1 \le \mathrm{res} \le 10^{18}$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
\multirow{2}{*}{"mul $y$"} & \multirow{2}{*}{$1 \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x \cdot y$} & $\text{if } 1 \le \mathrm{res} \le 10^{18}$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
\multirow{2}{*}{"div $y$"} & \multirow{2}{*}{$1 \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x/y$} & $\text{if } y$ divides $x$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
"digit" & — & $\mathrm{res} = S(x)$$^{\text{∗}}$ & — & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\end{tabular}
\end{center}
You have to make $x$ equal to $n$ using \textbf{at most} $\mathbf{7}$ commands.
\begin{footnotesize}
$^{\text{∗}}$$S(n)$ is a function that returns the sum of all the individual digits of a non-negative integer $n$. For example, $S(123) = 1 + 2 + 3 = 6$
\end{footnotesize}
|
Figure out a way to make $x$ equal $1$, then the final command will be $\texttt{"add n-1"}$. What range of values can $x$ take after the first application of the $\texttt{"digit"}$ command? What about the second application of the $\texttt{"digit"}$ command? Avoid overusing the $\texttt{"digit"}$ command - after the second application, $x$ is limited to the range $[1, 16]$. Consider using the binary representation of $x$ instead. Apply $\texttt{"digit"}$ $\rightarrow$ $x \in [1, 81]$. Apply $\texttt{"digit"}$ $\rightarrow$ $x \in [1, 16]$. Apply $\texttt{"add -8"}$ $\rightarrow$ $x \in [1, 8]$. Apply $\texttt{"add -4"}$ $\rightarrow$ $x \in [1, 4]$. Apply $\texttt{"add -2"}$ $\rightarrow$ $x \in [1, 2]$. Apply $\texttt{"add -1"}$ $\rightarrow$ $x$ becomes exactly $1$. Finally, apply $\texttt{"add n-1"}$.
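A tiny brute-force check (illustration only, not part of the reference solution) of the bound used above: after the first $\texttt{"digit"}$ we have $x \le 81$, and the largest digit sum of any number in $[1, 81]$ is $16$ (attained at $79$), so the second $\texttt{"digit"}$ always lands in $[1, 16]$.

#include <bits/stdc++.h>
using namespace std;
int S(int v) { int s = 0; for (; v > 0; v /= 10) s += v % 10; return s; }   // digit sum
int main() {
    int best = 0;
    for (int y = 1; y <= 81; ++y) best = max(best, S(y));
    cout << best << "\n";   // prints 16
}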
|
[
"bitmasks",
"constructive algorithms",
"interactive",
"math",
"number theory"
] | 1,500
|
#include<bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
cout << "digit" << endl;
int x;
cin >> x;
cout << "digit" << endl;
cin >> x;
for (int i = 8; i >= 1; i /= 2) {
cout << "add " << -i << endl;
cin >> x;
}
cout << "add " << n - 1 << endl;
cin >> x;
cout << "!" << endl;
cin >> x;
assert(x == 1);
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2109
|
C2
|
Hacking Numbers (Medium Version)
|
\textbf{This is the medium version of the problem. In this version, you can send at most $\mathbf{4}$ commands. You can make hacks only if all versions of the problem are solved.}
This is an interactive problem.
Welcome, Duelists! In this interactive challenge, there is an unknown integer $x$ ($1 \le x \le 10^9$). You must make it equal to a given integer in the input $n$. By harnessing the power of "Mathmech" monsters, you can send a command to do one of the following:
\begin{center}
\begin{tabular}{|c||c||c||l||l||c|}
\hline
\textbf{Command} & \textbf{Constraint} & \textbf{Result} & \textbf{Case} & \textbf{Update} & \textbf{Jury's response} \
\hline
\hline
\multirow{2}{*}{"add $y$"} & \multirow{2}{*}{$-10^{18} \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x + y$} & $\text{if } 1 \le \mathrm{res} \le 10^{18}$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
\multirow{2}{*}{"mul $y$"} & \multirow{2}{*}{$1 \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x \cdot y$} & $\text{if } 1 \le \mathrm{res} \le 10^{18}$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
\multirow{2}{*}{"div $y$"} & \multirow{2}{*}{$1 \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x/y$} & $\text{if } y$ divides $x$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
"digit" & — & $\mathrm{res} = S(x)$$^{\text{∗}}$ & — & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\end{tabular}
\end{center}
You have to make $x$ equal to $n$ using \textbf{at most} $\mathbf{4}$ commands.
\begin{footnotesize}
$^{\text{∗}}$$S(n)$ is a function that returns the sum of all the individual digits of a non-negative integer $n$. For example, $S(123) = 1 + 2 + 3 = 6$
\end{footnotesize}
|
The first command is not $\texttt{"digit"}$. The first command is $\texttt{"mul"}$. Combining the $\texttt{"mul"}$ and $\texttt{"digit"}$ commands is powerful! Consider a number that links these two commands. Recall the well-known fact from high school: a number is divisible by $9$ if and only if the sum of its digits is divisible by $9$. Apply $\texttt{"mul 9"}$. Apply $\texttt{"digit"}$ $\rightarrow$ the digit sum of a nonzero multiple of $9$ is again a nonzero multiple of $9$, and here it is at most $81$, so $x \in \{9, 18, 27, 36, 45, 54, 63, 72, 81\}$. Apply $\texttt{"digit"}$ $\rightarrow$ $x$ becomes exactly $9$. Finally, apply $\texttt{"add n-9"}$.
|
[
"constructive algorithms",
"interactive",
"math",
"number theory"
] | 1,700
|
#include<bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
cout << "mul " << 9 << endl;
int x;
cin >> x;
cout << "digit" << endl;
cin >> x;
cout << "digit" << endl;
cin >> x;
cout << "add " << n - 9 << endl;
cin >> x;
cout << "!" << endl;
cin >> x;
assert(x == 1);
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2109
|
C3
|
Hacking Numbers (Hard Version)
|
\textbf{This is the hard version of the problem. In this version, the limit of commands you can send is described in the statement. You can make hacks only if all versions of the problem are solved.}
This is an interactive problem.
Welcome, Duelists! In this interactive challenge, there is an unknown integer $x$ ($1 \le x \le 10^9$). You must make it equal to a given integer in the input $n$. By harnessing the power of "Mathmech" monsters, you can send a command to do one of the following:
\begin{center}
\begin{tabular}{|c||c||c||l||l||c|}
\hline
\textbf{Command} & \textbf{Constraint} & \textbf{Result} & \textbf{Case} & \textbf{Update} & \textbf{Jury's response} \
\hline
\hline
\multirow{2}{*}{"add $y$"} & \multirow{2}{*}{$-10^{18} \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x + y$} & $\text{if } 1 \le \mathrm{res} \le 10^{18}$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
\multirow{2}{*}{"mul $y$"} & \multirow{2}{*}{$1 \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x \cdot y$} & $\text{if } 1 \le \mathrm{res} \le 10^{18}$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
\multirow{2}{*}{"div $y$"} & \multirow{2}{*}{$1 \le y \le 10^{18}$} & \multirow{2}{*}{$\mathrm{res} = x/y$} & $\text{if } y$ divides $x$ & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\hline
& & & $\mathrm{else}$ & $x \leftarrow x$ & "0" \
\hline
\hline
"digit" & — & $\mathrm{res} = S(x)$$^{\text{∗}}$ & — & $x \leftarrow \mathrm{res}$ & "1" \
\hline
\end{tabular}
\end{center}
Let $f(n)$ be the minimum integer such that there is a sequence of $f(n)$ commands that transforms $x$ into $n$ for all $x(1 \le x \le 10^9)$. You do not know the value of $x$ in advance. Find $f(n)$ such that, no matter what $x$ is, you can always transform it into $n$ using at most $f(n)$ commands.
Your task is to change $x$ into $n$ using \textbf{at most} $f(n)$ commands.
\begin{footnotesize}
$^{\text{∗}}$$S(n)$ is a function that returns the sum of all the individual digits of a non-negative integer $n$. For example, $S(123) = 1 + 2 + 3 = 6$
\end{footnotesize}
|
While $9$ connects the $\texttt{"mul"}$ and $\texttt{"digit"}$ commands, there may be other numbers that do so as well. Another valid solution for the medium version is as follows: $\texttt{"digit"}$. $\texttt{"mul 99"}$. $\texttt{"digit"}$. $\texttt{"add n-18"}$. But what's the reasoning behind this sequence? $S[(10^d - 1)x] = 9d \quad \forall x \in [1,10^d]$. Let's prove that $S[(10^d - 1)x] = 9d \quad \forall x \in [1, 10^d].$ Observe that $x \cdot (10^d-1) = x \cdot 10^d - x = (x - 1) \cdot 10^d + 10^d - x = (x - 1) \cdot 10^d + [(10^d - 1) - (x - 1)].$ The first term represents $(x - 1)$ shifted $d$ places to the left (i.e., multiplied by $10^d$). The second term is the $d$-digit string of nines minus $(x - 1)$, namely $\underbrace{999 \ldots 999}_{d \text{ times}} - (x - 1).$ Therefore, for every $i$ ($1 \le i \le d$), the $i$-th digit of the second term and the $(i + d)$-th digit of the first term complement each other to sum to $9$. Drumroll, please! The number we're about to use is $999\,999\,999$. With just three commands, we can turn any $x$ into $n$: Apply $\texttt{"mul } 999\,999\,999 \texttt{"}$. Apply $\texttt{"digit"}$ $\rightarrow$ $x$ becomes exactly $81$. Finally, apply $\texttt{"add n-81"}$. There's one special case: if $n = 81$, you only need the first two commands - meaning $f(81) = 2$. Now let's see why no other $n$ can get away with just two commands. First, note that the digit-sum function $S(i)$ always changes when you add $1$, so $S(i) \neq S(i + 1)$. Consider your choice of first command: $\texttt{"add y"}$: When $y$ is positive, it shifts the entire range of possible $x$-values. Otherwise, it shrinks the range but never cuts it down below half its size. In either case, it can't force $x$ into a single value. $\texttt{"div y"}$: It fails even on small $x$-values (e.g., $1 \le x \le 4$) - it doesn't force a single outcome. $\texttt{"digit"}$: After the first application, it only restricts $x$ to $[1, 81]$, and a second application only narrows it to $[1, 16]$. You still don't get a single fixed value. That leaves only one candidate to start with, which is $\texttt{"mul y"}$. Suppose some other multiplier $y \neq 999\,999\,999$ made $S(x \cdot y)$ constant. Note that $y \le 10^9$ is required, otherwise the multiplication would fail for the largest values of $x$; among such $y$, only $999\,999\,999$ has digit sum $81$. Test it against $x = 1$ and $x = 999\,999\,999$ (using the lemma above with $d = 9$ for the latter): $S(1 \cdot y) \le 80 < 81 = S(999\,999\,999\cdot y)$ They can't both be the same, so no such $y \neq 999\,999\,999$ exists. Similar to the above reasoning, keeping in mind that $S(i) \neq S(i + 1)$, the only possible second command is $\texttt{"digit"}$ (as this is the only way to shrink the range, since no other command can do so). Hence, only $n = 81$ can reach a fixed result in two commands; every other $n$ requires three.
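The lemma can also be sanity-checked by brute force for small $d$; the sketch below (illustration only, not part of the solution) verifies $S((10^d - 1) \cdot x) = 9d$ for all $x \in [1, 10^d]$ and $d \le 6$.

#include <bits/stdc++.h>
using namespace std;
long long S(long long v) {                       // digit sum
    long long s = 0;
    for (; v > 0; v /= 10) s += v % 10;
    return s;
}
int main() {
    for (int d = 1; d <= 6; ++d) {
        long long p = 1;
        for (int i = 0; i < d; ++i) p *= 10;     // p = 10^d
        for (long long x = 1; x <= p; ++x)
            assert(S((p - 1) * x) == 9LL * d);
    }
    cout << "lemma verified for d = 1..6\n";
}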
|
[
"constructive algorithms",
"interactive",
"math",
"number theory"
] | 2,600
|
#include<bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
cout << "mul 999999999" << endl;
int x;
cin >> x;
cout << "digit" << endl;
cin >> x;
if (n != 81) {
cout << "add " << n - 81 << endl;
cin >> x;
}
cout << "!" << endl;
cin >> x;
assert(x == 1);
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2109
|
D
|
D/D/D
|
Of course, a problem with the letter D is sponsored by Declan Akaba.
You are given a simple, connected, undirected graph with $n$ vertices and $m$ edges. The graph contains no self-loops or multiple edges. You are also given a multiset $A$ consisting of $\ell$ elements: $$ A = \{A_1, A_2, \ldots, A_\ell\} $$
Starting from vertex $1$, you may perform the following move \textbf{any number} of times, as long as the multiset $A$ is not empty:
- Select an element $k \in A$ and remove it from the multiset . You must remove exactly one occurrence of $k$ from $A$.
- Traverse any walk$^{\text{∗}}$ of exactly $k$ edges to reach some vertex (possibly the same one you started from).
For each $i$ ($1 \le i \le n$), determine whether there exists a sequence of such moves that starts at vertex $1$ and ends at vertex $i$, using the original multiset $A$.
Note that the check for each vertex $i$ is independent — you restart from vertex $1$ and use the original multiset $A$ for each case.
\begin{footnotesize}
$^{\text{∗}}$A walk of length $k$ is a sequence of vertices $v_0, v_1, \ldots, v_{k - 1}, v_k$ such that each consecutive pair of vertices $(v_i, v_{i + 1})$ is connected by an edge in the graph. The sequence may include repeated vertices.
\end{footnotesize}
|
Suppose you've reached vertex $i$; what is the shortest possible sequence of moves that brings you back to $i$? At any vertex $j$ adjacent to $i$, you may take the back-and-forth move $[i \to j \to i]$. Let us temporarily set aside the multiset $A$. Starting from vertex $1$, define $\mathrm{dist}[i][p] = \min \{ \, \text{length of a walk from } 1 \text{ to } i \mid \text{length} \bmod 2 = p \, \}, \text{ for parity } p \in \{0, 1\}.$ Since any walk of length $d$ can be extended to $d + 2$ by inserting a back-and-forth move $[i \to j \to i]$ at any vertex $j$ adjacent to $i$, only the minimal even and odd distances matter. By performing a breadth-first search from vertex $1$, we derive $\mathrm{dist}$. Now reintroduce the multiset $A$ with total sum $S = \sum A$. We would like to select a sub-multiset whose sum $S'$ satisfies both $S' \ge \mathrm{dist}[i][p]$ and $S' \bmod 2 = p$. Taking all elements gives $S' = S$. Let $p = S \bmod 2$; then any vertex with $\mathrm{dist}[i][p] \le S$ is reachable by a walk of parity $p$. Otherwise, we must find a suitable $S'$ with $S' \bmod 2 = 1 - p$ satisfying $\mathrm{dist}[i][1 - p] \le S'$, and the minimal way to flip parity is to remove the smallest odd element $\min_{\mathrm{odd}}$ from the multiset $A$. Parity change is impossible if no such odd element exists, since removing any even element leaves the parity unchanged. Setting $S' = S - \min_{\mathrm{odd}}$, any vertex with $\mathrm{dist}[i][1 - p] \le S'$ is reachable by a walk of parity $1 - p$. In conclusion, vertex $i$ is attainable by a walk using the original multiset $A$ if and only if either $\mathrm{dist}[i][p] \le S \text{ when } p = S \bmod 2 \text{, or}$ $\mathrm{dist}[i][p] \le S - \min_{\mathrm{odd}} \text{ when } p \neq S \bmod 2 \text{ and the smallest odd element } \min_{\mathrm{odd}} \in A \text{ exists.}$ The overall time complexity is: $O(n + m + \ell)$.
|
[
"dfs and similar",
"graphs",
"greedy",
"shortest paths"
] | 1,900
|
#include<bits/stdc++.h>
using namespace std;
void solve() {
int n, m, l;
cin >> n >> m >> l;
int const inf = 2e9 + 1;
int sum = 0, min_odd = inf;
vector<int> a(l);
for (int i = 0; i < l; ++i) {
cin >> a[i];
sum += a[i];
if (a[i] % 2) {
min_odd = min(min_odd, a[i]);
}
}
vector adj(n, vector<int>());
for (int i = 0; i < m; ++i) {
int u, v;
cin >> u >> v;
--u;
--v;
adj[u].push_back(v);
adj[v].push_back(u);
}
vector<array<int, 2>> dist(n, {inf, inf});
queue<pair<int, int>> q;
q.push({0, 0});
dist[0][0] = 0;
while (q.size()) {
auto [u, p] = q.front();
q.pop();
for (auto v : adj[u]) {
if (dist[v][!p] > dist[u][p] + 1) {
dist[v][!p] = dist[u][p] + 1;
q.push({v, !p});
}
}
}
for (int i = 0; i < n; ++i) {
bool found = 0;
for (int p = 0; p < 2; ++p) {
int s = sum - (p == sum % 2 ? 0 : min_odd);
if (dist[i][p] <= s) {
found = 1;
}
}
cout << found;
}
cout << "\n";
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2109
|
E
|
Binary String Wowee
|
Mouf is bored with themes, so he decided not to use any themes for this problem.
You are given a binary$^{\text{∗}}$ string $s$ of length $n$. You are to perform the following operation exactly $k$ times:
- select an index $i$ ($1 \le i \le n$) such that $s_i = \mathtt{0}$;
- then flip$^{\text{†}}$ each $s_j$ for all indices $j$ ($1 \le j \le i$).
You need to count the number of possible ways to perform all $k$ operations.
Since the answer could be ginormous, print it modulo $998\,244\,353$.
Two sequences of operations are considered different if they differ in the index selected at any step.
\begin{footnotesize}
$^{\text{∗}}$A binary string is a string that consists only of the characters $\mathtt{0}$ and $\mathtt{1}$.
$^{\text{†}}$Flipping a binary character is changing it from $\mathtt{0}$ to $\mathtt{1}$ or vice versa.
\end{footnotesize}
|
The state of $s_i$ depends entirely on how many operations have been performed on the suffix $s_i, s_{i+1}, \ldots, s_n$. Let $dp[i][j]$ denote the number of ways to perform exactly $j$ moves on the suffix $s_i, s_{i+1}, \ldots, s_n$. Some useful insight is to see that if you operate on an element, then every element after it will be unaffected; similarly, the state of an element in the string depends only on the number of operations used on the suffix starting at that element. More specifically, an element's state flips with each operation applied to a suffix that includes it, so its value depends entirely on the number of operations done to its suffix. This inspires the following $dp$ state: $dp[i][j] = \text{the number of ways to make exactly } j \text{ moves on the suffix } s_i, \ldots, s_n \text{ only}.$ Now, regarding transitions - the number of ways to move from state $dp[i][j]$ to $dp[i - 1][j + k]$ (in other words, the number of ways to add $k$ operations at the $(i - 1)$-th index) can be thought of in terms of forming strings composed of characters $A$ and $B$, where $A$ represents making a move on $s_{i - 1}$ and $B$ represents making a move on $s_i, \ldots, s_n$. Each valid string with $j$ $B$s and $k$ $A$s represents a unique sequence of moves, so the total number of transitions corresponds to the number of such valid strings. Notice that a string will be valid if and only if every $A$ occurs when $s_{i - 1}$ is $0$; because $s_{i - 1}$ flips with every operation, every $A$ should occur on either only even positions or only odd positions, depending on the initial value of $s_{i - 1}$. This means that the transition will be either $\displaystyle\binom{\left\lfloor \frac{j + k}{2} \right\rfloor}{k}$ or $\displaystyle\binom{\left\lceil \frac{j + k}{2} \right\rceil}{k}$. Finally, the answer can be found in $dp[1][k]$. The overall time complexity is: $O(n \cdot k^2)$.
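The transition count can be sanity-checked by brute force for small values; the sketch below (illustration only, not part of the reference solution) enumerates all interleavings of $k$ moves at index $i-1$ ("A") and $j$ moves strictly to the right ("B") and compares the number of valid ones with the binomial coefficient used above.

#include <bits/stdc++.h>
using namespace std;
long long C(int n, int r) {                       // binomial coefficient
    if (r < 0 || r > n) return 0;
    long long res = 1;
    for (int i = 1; i <= r; ++i) res = res * (n - r + i) / i;
    return res;
}
int main() {
    for (int bit = 0; bit <= 1; ++bit)            // initial value of the character s_{i-1}
        for (int j = 0; j <= 6; ++j)              // moves strictly to the right ("B")
            for (int k = 0; k <= 6; ++k) {        // moves at index i-1 ("A")
                int len = j + k;
                long long cnt = 0;
                for (int mask = 0; mask < (1 << len); ++mask) {
                    if (__builtin_popcount(mask) != k) continue;       // set bits mark "A" moves
                    int cur = bit;
                    bool ok = true;
                    for (int t = 0; t < len; ++t) {
                        if ((mask >> t & 1) && cur != 0) ok = false;   // "A" requires s_{i-1} = 0
                        cur ^= 1;                                      // every move flips s_{i-1}
                    }
                    cnt += ok;
                }
                int spaces = (j + k + (bit == 0)) / 2;                 // ceil for '0', floor for '1'
                assert(cnt == C(spaces, k));
            }
    cout << "transition formula verified for all j, k <= 6\n";
}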
|
[
"combinatorics",
"dp",
"strings"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define debug(x) cout << #x << " = " << x << "\n";
#define vdebug(a) cout << #a << " = "; for(auto x: a) cout << x << " "; cout << "\n";
mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
int uid(int a, int b) { return uniform_int_distribution<int>(a, b)(rng); }
ll uld(ll a, ll b) { return uniform_int_distribution<ll>(a, b)(rng); }
const int MOD = 998244353;
template<ll mod> // template was not stolen from https://codeforces.com/profile/SharpEdged
struct modnum {
static constexpr bool is_big_mod = mod > numeric_limits<int>::max();
using S = conditional_t<is_big_mod, ll, int>;
using L = conditional_t<is_big_mod, __int128, ll>;
S x;
modnum() : x(0) {}
modnum(ll _x) {
_x %= static_cast<ll>(mod);
if (_x < 0) { _x += mod; }
x = _x;
}
modnum pow(ll n) const {
modnum res = 1;
modnum cur = *this;
while (n > 0) {
if (n & 1) res *= cur;
cur *= cur;
n /= 2;
}
return res;
}
modnum inv() const { return (*this).pow(mod-2); }
modnum& operator+=(const modnum& a){
x += a.x;
if (x >= mod) x -= mod;
return *this;
}
modnum& operator-=(const modnum& a){
if (x < a.x) x += mod;
x -= a.x;
return *this;
}
modnum& operator*=(const modnum& a){
x = static_cast<L>(x) * a.x % mod;
return *this;
}
modnum& operator/=(const modnum& a){ return *this *= a.inv(); }
friend modnum operator+(const modnum& a, const modnum& b){ return modnum(a) += b; }
friend modnum operator-(const modnum& a, const modnum& b){ return modnum(a) -= b; }
friend modnum operator*(const modnum& a, const modnum& b){ return modnum(a) *= b; }
friend modnum operator/(const modnum& a, const modnum& b){ return modnum(a) /= b; }
friend bool operator==(const modnum& a, const modnum& b){ return a.x == b.x; }
friend bool operator!=(const modnum& a, const modnum& b){ return a.x != b.x; }
friend bool operator<(const modnum& a, const modnum& b){ return a.x < b.x; }
friend ostream& operator<<(ostream& os, const modnum& a){ os << a.x; return os; }
friend istream& operator>>(istream& is, modnum& a) { ll x; is >> x; a = modnum(x); return is; }
};
using mint = modnum<MOD>;
struct Combi{
vector<mint> _fac, _ifac;
int n;
Combi() {
n = 1;
_fac.assign(n + 1, 1);
_ifac.assign(n + 1, 1);
}
void check_size(int m){
int need = n;
while (need < m) need *= 2;
m = need;
if (m <= n) return;
_fac.resize(m + 1);
_ifac.resize(m + 1);
for (int i = n + 1; i <= m; i++) _fac[i] = i * _fac[i - 1];
_ifac[m] = _fac[m].inv();
for (int i = m - 1; i > n; i--) _ifac[i] = _ifac[i + 1] * (i + 1);
n = m;
}
mint fac(int m){
check_size(m);
return _fac[m];
}
mint ifac(int m){
check_size(m);
return _ifac[m];
}
mint ncr(int n, int r){
if (n < r || r < 0) return 0;
return fac(n) * ifac(n - r) * ifac(r);
}
mint npr(int n, int r){
if (n < r || r < 0) return 0;
return fac(n) * ifac(n - r);
}
} comb;
void solve(){
int n, k;
cin >> n >> k;
string s;
cin >> s;
vector<vector<mint>> dp(n, vector<mint>(k + 1));
dp[n - 1][0] = 1;
if (s[n - 1] == '0')
dp[n - 1][1] = 1;
for (int i = n - 2; i >= 0; i--){
for (int j = 0; j <= k; j++){
for (int c = 0; j + c <= k; c++){
int spaces = (j + c + (s[i] == '0')) / 2;
dp[i][j + c] += dp[i + 1][j] * comb.ncr(spaces, c);
}
}
}
cout << dp[0][k] << "\n";
}
int main(){
ios::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int t;
cin >> t;
while (t--) solve();
}
|
2109
|
F
|
Penguin Steps
|
Mouf, the clever master of Darkness, and Fouad, the brave champion of Light, have entered the Grid Realm once more. This time, they have found the exit, but it is guarded by fierce monsters! They must fight with their bare hands instead of summoning monsters!
Mouf and Fouad are standing on an $n \times n$ grid. Each cell $(i, j)$ has a value $a_{i,j}$ and a color. The color of a cell is white if $c_{i,j} = 0$ and black if $c_{i,j} = 1$.
Mouf starts at the top-left corner $(1, 1)$, and Fouad starts at the bottom-left corner $(n, 1)$. Both are trying to reach the exit cell at $(r, n)$.
A path is defined as a sequence of adjacent cells (sharing a horizontal or vertical edge). The cost of a path is the maximum value of $a_{i, j}$ among all cells included in the path (including the first and last cells).
Let:
- $\mathrm{dis}_M$ denote the minimum possible cost of a valid path from Mouf's starting position $(1, 1)$ to the exit $(r, n)$;
- $\mathrm{dis}_F$ denote the minimum possible cost of a valid path from Fouad's starting position $(n, 1)$ to the exit $(r, n)$.
Before moving, Mouf can perform up to $k$ operations. In each operation, he may select any black cell and increment its value by $1$ (possibly choosing the same cell multiple times).
Mouf wants to maximize $\mathrm{dis}_F$ while ensuring that his own cost $\mathrm{dis}_M$ remains \textbf{unchanged} (as if he performed no operations). If Mouf acts optimally, what are the values of $\mathrm{dis}_M$ and $\mathrm{dis}_F$?
|
Suppose you need $\mathrm{dis}_F \ge x$. To achieve this, construct a "cage" - a contiguous path of cells all with values at least $x$ - that separates Fouad from the exit, while ensuring that $\mathrm{dis}_M$ remains unchanged. Try to find a path for Mouf that impacts the building of the "cage" as little as possible. First, find $\mathrm{dis}_M$ using Dijkstra's algorithm or DSU. Define $can(x)$ to be true if you can make $\mathrm{dis}_F$ greater than or equal to $x$, and false otherwise. It's clear that if $can(x)$ is true, then $can(x-1)$ is also true, which leads us to perform a binary search on the value of $x$. Suppose we choose a set $S$ of black cells with values strictly less than $x$ and decide to apply operations on them; the optimal strategy is to make the values of all chosen cells equal to $x$. Now, suppose we don't care about $\mathrm{dis}_M$. How can we find the maximum possible $\mathrm{dis}_F$? We need to create a cage that separates Fouad from the exit. The cage consists of the borders of the grid, in addition to cells with values greater than or equal to $x$ (after applying operations), forming a connected chain such that every two consecutive cells in the chain share an edge or corner. By using multi-source Dijkstra's algorithm, start from one border and try to reach another, where the weight of each cell represents how many operations are needed to raise its value to at least $x$. If its value cannot be raised (e.g., if the cell is white and its value is less than $x$), then set the weight to infinity. We have six cases to build the cage, depending on which two sides of the grid the chain connects. Now we have two cases to consider: $x \le \mathrm{dis}_M$: In this scenario, $\mathrm{dis}_M$ will remain unchanged regardless of the set $S$ we choose. $x > \mathrm{dis}_M$: Here, we must ensure that at least one path for Mouf has a cost equal to $\mathrm{dis}_M$, while ensuring that no cell in this path is included in $S$. But which path should we retain? Suppose Mouf takes a path $P$. This action divides the grid into two parts: every cell reachable from $(n,1)$ without passing through any cell in $P$ belongs to Fouad's part (if $(n, 1)$ is already in $P$, then there are no cells that belong to Fouad's part), and the remaining cells belong to Mouf. As we observe, the more cells allocated to Fouad's part, the greater the options we have for constructing a cage. Therefore, we define a super path as a path that leaves the maximum number of cells available for Fouad's part. But how many super paths exist? In fact, there is only one super path; here is the proof. Suppose we have two super paths that intersect at certain cells (including the first and last cells). In this case, we have two scenarios to consider: either they intersect only at the first and last cells - in this situation, one of the paths cannot be a super path, leading to a contradiction; or they intersect at other cells as well - in this case, neither of the paths can be considered a super path, since we can take the optimal segments from each path to construct a superior path, which also leads to a contradiction. Now, how do we identify that super path? We begin by performing a multi-source BFS, initializing the queue with the cells in the first row and the first $r$ cells of the last column (as they always belong to Mouf's part). When visiting a cell, we mark it. If its value is less than or equal to $\mathrm{dis}_M$, we pop it from the queue and continue; otherwise, we spread to the eight neighboring cells. Through this process, we will have certainly marked the super path (and potentially some additional cells from Mouf's part). If $x > \mathrm{dis}_M$, it becomes impossible to utilize any marked cell to construct the cage - therefore, we set the weight for these cells to infinity. While you might consider the marked cells that are not part of the super path, this is inconsequential since any cage utilizing these cells will invariably pass through the super path. The overall time complexity of this approach is: $O(n^2 \cdot \log(A_{MAX} + k) \cdot \log(n))$.
|
[
"binary search",
"dfs and similar",
"flows",
"graphs",
"shortest paths"
] | 3,000
|
#include<bits/stdc++.h>
using namespace std;
using i64 = long long;
constexpr int inf = 1e9;
array<int, 8> dx{0, 0, 1, -1, 1, 1, -1, -1};
array<int, 8> dy{1, -1, 0, 0, -1, 1, -1, 1};
void solve() {
int n, r, k;
cin >> n >> r >> k;
--r;
vector a(n, vector<int>(n));
for (int i = 0; i < n; ++i) {
for (int j = 0; j < n; ++j) {
cin >> a[i][j];
}
}
vector b(n, vector<int>(n));
for (int i = 0; i < n; ++i) {
string s;
cin >> s;
for (int j = 0; j < n; ++j) {
b[i][j] = s[j] - '0';
}
}
auto in = [&](int i, int j) {
return (0 <= i && i < n && 0 <= j && j < n);
};
priority_queue<array<int, 3>, vector<array<int, 3>>, greater<>> pq;
pq.push({a[0][0], 0, 0});
vector dis(n, vector<int>(n, inf));
dis[0][0] = a[0][0];
while (pq.size()) {
auto [mx, i, j] = pq.top();
pq.pop();
if (dis[i][j] != mx) {
continue;
}
for (int dir = 0; dir < 4; ++dir) {
int ni = i + dx[dir];
int nj = j + dy[dir];
if (in(ni, nj) && dis[ni][nj] > max(mx, a[ni][nj])) {
int nmx = max(mx, a[ni][nj]);
dis[ni][nj] = nmx;
pq.push({nmx, ni, nj});
}
}
}
int mouf = dis[r][n - 1];
vector dont(n, vector<int>(n));
auto run = [&](auto &&run, int i, int j) -> void {
for (int dir = 0; dir < 8; ++dir) {
int ni = i + dx[dir];
int nj = j + dy[dir];
if (in(ni, nj) && dont[ni][nj] == 0) {
dont[ni][nj] = 1;
if (a[ni][nj] > mouf) {
run(run, ni, nj);
}
}
}
};
for (int j = 0; j < n; ++j) {
dont[0][j] = 1;
if (a[0][j] > mouf) {
run(run, 0, j);
}
}
for (int i = 0; i <= r; ++i) {
dont[i][n - 1] = 1;
if (a[i][n - 1] > mouf) {
run(run, i, n - 1);
}
}
int L = 1, R = 2e6;
while (L <= R) {
int mid = (L + R) / 2;
for (int j = 0; j < n; ++j) {
if ((mid > mouf && dont[n - 1][j]) || (mid > a[n - 1][j] && b[n - 1][j] == 0)) {
continue;
}
pq.push({max(0, mid - a[n - 1][j]), n - 1, j});
}
for (int i = r; i < n; ++i) {
if ((mid > mouf && dont[i][n - 1]) || (mid > a[i][n - 1] && b[i][n - 1] == 0)) {
continue;
}
pq.push({max(0, mid - a[i][n - 1]), i, n - 1});
}
fill(dis.begin(), dis.end(), vector<int>(n, inf));
while (pq.size()) {
auto [cost, i, j] = pq.top();
pq.pop();
dis[i][j] = min(dis[i][j], cost);
if (dis[i][j] != cost) {
continue;
}
for (int dir = 0; dir < 8; ++dir) {
int ni = i + dx[dir];
int nj = j + dy[dir];
if (in(ni, nj)) {
if ((mid > mouf && dont[ni][nj]) || (mid > a[ni][nj] && b[ni][nj] == 0)) {
continue;
}
if (dis[ni][nj] > cost + max(0, mid - a[ni][nj])) {
int ncost = cost + max(0, mid - a[ni][nj]);
dis[ni][nj] = ncost;
pq.push({ncost, ni, nj});
}
}
}
}
int minCost = inf;
if (mid <= mouf) {
for (int j = 0; j < n; ++j) {
minCost = min(minCost, dis[0][j]);
}
for (int i = 0; i <= r; ++i) {
minCost = min(minCost, dis[i][n - 1]);
}
}
for (int i = 1; i < n; ++i) {
minCost = min(minCost, dis[i][0]);
}
if (minCost <= k) {
L = mid + 1;
} else {
R = mid - 1;
}
}
cout << mouf << " " << R << "\n";
}
int main() {
ios::sync_with_stdio(0), cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2110
|
A
|
Fashionable Array
|
In 2077, everything became fashionable among robots, even arrays...
We will call an array of integers $a$ fashionable if $\min(a) + \max(a)$ is divisible by $2$ without a remainder, where $\min(a)$ — the value of the minimum element of the array $a$, and $\max(a)$ — the value of the maximum element of the array $a$.
You are given an array of integers $a_1, a_2, \ldots, a_n$. In one operation, you can remove any element from this array. Your task is to determine the minimum number of operations required to make the array $a$ fashionable.
|
Sort the array $a$. Notice that now, after any removals, the minimum element is the leftmost of the remaining elements, and the maximum is the rightmost of the remaining elements. Check if it is fashionable. If yes, then the answer is $0$. Otherwise, $a_1 + a_n$ is not divisible by $2$. Thus, we need to remove the minimum number of elements to change the parity of $\min(a)$ or $\max(a)$, which will change the parity of $\min(a) + \max(a)$, making it divisible by $2$. To do this, find the first number $a_i$ that has a different parity than $\min(a)$, and the last number $a_j$ that has a different parity than $\max(a)$. Then the answer is $\min(i - 1, n - j)$. The final asymptotic complexity is $O(n \log n)$.
|
[
"implementation",
"sortings"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> x(n);
for (int i = 0; i < n; ++i) {
cin >> x[i];
}
sort(x.begin(), x.end());
if (x[0] % 2 == x[n - 1] % 2) {
cout << 0 << endl;
return;
}
int left = n, right = n;
for (int i = 1; i < n; ++i) {
if (x[i] % 2 != x[0] % 2) {
left = i;
break;
}
}
for (int i = 1; i < n; ++i) {
if (x[n - i - 1] % 2 != x[n - 1] % 2) {
right = i;
break;
}
}
cout << min(left, right) << '\n';
}
int main() {
int t;
cin >> t;
while (t--) {
solve();
}
}
|
2110
|
B
|
Down with Brackets
|
In 2077, robots decided to get rid of balanced bracket sequences once and for all!
A bracket sequence is called balanced if it can be constructed by the following formal grammar.
- The empty sequence $\varnothing$ is balanced.
- If the bracket sequence $A$ is balanced, then $\mathtt{(}A\mathtt{)}$ is also balanced.
- If the bracket sequences $A$ and $B$ are balanced, then the concatenated sequence $A B$ is also balanced.
You are the head of the department for combating balanced bracket sequences, and your main task is to determine which brackets you can destroy and which you cannot.
You are given a balanced bracket sequence represented by a string $s$, consisting of the characters ( and ). Since the robots' capabilities are not limitless, they can remove \textbf{exactly} one opening bracket and \textbf{exactly} one closing bracket from the string.
Your task is to determine whether the robots can delete such two brackets so that the string $s$ is no longer a balanced bracket sequence.
|
Let's recall the algorithm for checking the correctness of a bracket sequence. To do this, we check that the balance $bal$ at each prefix is non-negative, and that the balance $bal$ of the entire sequence is equal to $0$. To break a correct bracket sequence, we need to violate at least one of the conditions. Note that the balance $bal$ of the entire sequence will not change when we remove one opening bracket and one closing bracket. Therefore, we need to violate the first condition. Notice that removing an opening bracket will decrease $bal$ by 1 in the suffix to the right of the bracket. Similarly, for a closing bracket, it will increase $bal$ by 1 in the suffix. It is clear that among all closing brackets, it is advantageous to remove the very last one, as this will not increase any $bal$. Among all opening brackets, it is advantageous to remove the very first one, as this will decrease all $bal$ by 1 (which is obviously the most optimal option after removal). Thus, the answer is actually the result of checking the original bracket sequence without the first and last brackets for correctness. If it is correct, then the sequence cannot be broken; otherwise, it can be.
|
[
"strings"
] | 900
|
#include "bits/stdc++.h"
#define int long long
#define all(v) (v).begin(), (v).end()
#define pb push_back
#define em emplace_back
#define mp make_pair
#define F first
#define S second
using namespace std;
template<class C>
using vec = vector<C>;
using vi = vector<int>;
using vpi = vector<pair<int, int>>;
using pii = pair<int, int>;
void solve() {
string s;
cin >> s;
int n = s.size();
int bal = 0;
for (int i = 1; i + 1 < n; i++) {
if (s[i] == '(') bal++;
else bal--;
if (bal < 0) {
cout << "YES\n";
return;
}
}
if (bal == 0) {
cout << "NO\n";
} else {
cout << "YES\n";
}
}
signed main() {
int tt;
cin >> tt;
while (tt--) {
solve();
}
}
|
2110
|
C
|
Racing
|
In 2077, a sport called hobby-droning is gaining popularity among robots.
You already have a drone, and you want to win. For this, your drone needs to fly through a course with $n$ obstacles.
The $i$-th obstacle is defined by two numbers $l_i, r_i$. Let the height of your drone at the $i$-th obstacle be $h_i$. Then the drone passes through this obstacle if $l_i \le h_i \le r_i$. Initially, the drone is on the ground, meaning $h_0 = 0$.
The flight program for the drone is represented by an array $d_1, d_2, \ldots, d_n$, where $h_{i} - h_{i-1} = d_i$, and $0 \leq d_i \leq 1$. This means that your drone either does not change height between obstacles or rises by $1$. You already have a flight program, but some $d_i$ in it are unknown and marked as $-1$. Replace the unknown $d_i$ with numbers $0$ and $1$ to create a flight program that passes through the entire obstacle course, or report that it is impossible.
|
Let's maintain the bounds $L, R$ - the segment of heights in which $h_i$ can currently lie. Initially, $L = R = 0$. Now, we will iterate from $i = 1$ to $n$. If $d_i$ is unknown, the drone may either stay or rise by $1$, so $R$ increases by $1$; if $d_i$ is given, both $L$ and $R$ increase by $d_i$. We also must have $l_i \le h_i \le r_i$, so if $R > r_i$, then $R = r_i$, and if $L < l_i$, then $L = l_i$. If at any point $L > R$, then such an array $d$ does not exist. Otherwise, the array can be restored by walking through the positions in reverse, starting from any feasible final height. The final asymptotic complexity is $O(n)$.
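A minimal sketch of this feasibility check, assuming $d_i = -1$ marks an unknown step (reconstruction of $d$ is omitted; the reference solution below uses an equivalent greedy instead):

#include <bits/stdc++.h>
using namespace std;
bool feasible(const vector<int>& d, const vector<int>& l, const vector<int>& r) {
    int n = d.size();
    long long L = 0, R = 0;                      // current range of possible heights
    for (int i = 0; i < n; ++i) {
        if (d[i] == -1) R += 1;                  // unknown step: rise by 0 or 1
        else { L += d[i]; R += d[i]; }           // forced step
        L = max<long long>(L, l[i]);             // clamp to the obstacle window
        R = min<long long>(R, r[i]);
        if (L > R) return false;                 // no valid height remains
    }
    return true;
}
int main() {
    // toy instance: d = [-1, 0, -1], obstacles [1,2], [1,2], [2,3] -> feasible
    cout << feasible({-1, 0, -1}, {1, 1, 2}, {2, 2, 3}) << "\n";   // prints 1
}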
|
[
"constructive algorithms",
"greedy"
] | 1,400
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> d(n);
for (auto &x : d) {
cin >> x;
}
vector<int> l(n), r(n);
for (int i = 0; i < n; ++i) {
cin >> l[i] >> r[i];
}
int left = 0;
vector<int> last;
for (int i = 0; i < n; ++i) {
if (d[i] == -1) {
last.push_back(i);
} else {
left += d[i];
}
while (left < l[i]) {
if (last.empty()) {
cout << -1 << '\n';
return;
}
d[last.back()] = 1;
++left;
last.pop_back();
}
while (left + last.size() > r[i]) {
if (last.empty()) {
cout << -1 << '\n';
return;
}
d[last.back()] = 0;
last.pop_back();
}
}
for (auto &x : d) {
cout << max(0, x) << " ";
}
cout << "\n";
return;
}
signed main() {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
|
2110
|
D
|
Fewer Batteries
|
In 2077, when robots took over the world, they decided to compete in the following game.
There are $n$ checkpoints, and the $i$-th checkpoint contains $b_i$ batteries. Initially, the Robot starts at the $1$-st checkpoint with no batteries and must reach the $n$-th checkpoint.
There are a total of $m$ one-way passages between the checkpoints. The $i$-th passage allows movement from point $s_i$ to point $t_i$ ($s_i < t_i$), but not the other way. Additionally, the $i$-th passage can only be used if the robot has at least $w_i$ charged batteries; otherwise, it will run out of power on the way.
When the robot arrives at point $v$, it can additionally take any number of batteries from $0$ to $b_v$, inclusive. Moreover, it always carries all previously collected batteries, and at each checkpoint, it recharges all previously collected batteries.
Find the minimum number of batteries that the robot can have at the end of the journey, or report that it is impossible to reach from the first checkpoint to the last.
|
We will build a directed graph: vertices are points, and edges are passages. We will perform a binary search on the answer. Suppose we are currently checking that $ans \leq mid$. Then we will perform the following dynamic programming: $dp_v$ - the maximum number of batteries that the robot can have when it is at vertex $v$. Initially, $dp_1 = 0$ and $dp_i = -\infty$ for $i > 1$. We process the vertices in increasing order of index (every passage goes from a smaller index to a larger one). Let's consider vertex $v$. First, we add $b_v$ to $dp_v$, as we could take batteries at vertex $v$. Then we set $dp_v = \min(dp_v, mid)$, because we cannot have more than $mid$ batteries. Then, for each outgoing edge $(v, u, w)$: if $dp_v < w$, we cannot pass through this edge; otherwise, $dp_u = \max(dp_u, dp_v)$. If $dp_n > 0$, then $ans \leq mid$, otherwise not. The final asymptotic complexity is $O((n + m) \log W)$.
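A small sketch of $check(mid)$ exactly as formulated above, with $-\infty$ (here LLONG_MIN) marking unreachable vertices; the reference solution below uses $0$ as the unreachable sentinel instead. The toy instance in main is made up for illustration.

#include <bits/stdc++.h>
using namespace std;
bool check(long long mid, const vector<long long>& b,
           const vector<vector<pair<int, long long>>>& g) {   // g[v] = {(u, w)} outgoing passages
    int n = b.size();
    vector<long long> dp(n, LLONG_MIN);
    dp[0] = 0;
    for (int v = 0; v < n; ++v) {
        if (dp[v] == LLONG_MIN) continue;      // vertex not reachable
        dp[v] = min(dp[v] + b[v], mid);        // pick up batteries, cap at mid
        for (auto [u, w] : g[v])               // edges go from smaller to larger index
            if (dp[v] >= w) dp[u] = max(dp[u], dp[v]);
    }
    return dp[n - 1] > 0;                      // last checkpoint reached with enough charge
}
int main() {
    // toy instance: 1 -> 2 needs 2 batteries, 2 -> 3 needs 3; b = [2, 1, 0]
    vector<long long> b = {2, 1, 0};
    vector<vector<pair<int, long long>>> g = {{{1, 2}}, {{2, 3}}, {}};
    cout << check(2, b, g) << " " << check(3, b, g) << "\n";   // prints 0 1
}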
|
[
"binary search",
"dfs and similar",
"dp",
"graphs",
"greedy",
"hashing"
] | 1,700
|
#include <bits/stdc++.h>
using namespace std;
const int INF = 1e9 + 11;
struct edge {
int t, w;
edge(int t, int w) : t(t), w(w) {}
};
void solve() {
int n, m;
cin >> n >> m;
vector<int> b(n);
for (auto &x : b) {
cin >> x;
}
vector<vector<edge>> graph(n);
for (int i = 0; i < m; ++i) {
int s, t, w;
cin >> s >> t >> w;
--s; --t;
graph[s].push_back(edge(t, w));
}
auto check = [&](int maxW) {
vector<int> best(n, 0);
for (int i = 0; i < n; ++i) {
if (i > 0 && best[i] == 0) {
continue;
}
best[i] += b[i];
best[i] = min(best[i], maxW);
for (auto p : graph[i]) {
if (p.w <= best[i]) {
best[p.t] = max(best[p.t], best[i]);
}
}
}
return (best.back() > 0);
};
if (!check(INF)) {
cout << -1 << endl;
return;
}
int l = 0, r = INF;
while (r - l > 1) {
int mid = (l + r) / 2;
if (check(mid)) {
r = mid;
} else {
l = mid;
}
}
cout << r << endl;
}
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
|
2110
|
E
|
Melody
|
In 2077, the robots that took over the world realized that human music wasn't that great, so they started composing their own.
To write music, the robots have a special musical instrument capable of producing $n$ different sounds. Each sound is characterized by its volume and pitch. A sequence of sounds is called music. Music is considered beautiful if any two consecutive sounds differ either only in volume or only in pitch. Music is considered boring if the volume or pitch of any three consecutive sounds is the same.
You want to compose beautiful, \textbf{non}-boring music that contains each sound produced by your musical instrument exactly once.
|
Let's construct a bipartite graph, where all possible volumes of sounds are on the left and all possible pitches are on the right. A volume vertex $v$ and a pitch vertex $p$ are connected by an edge if the musical instrument can produce the sound with volume $v$ and pitch $p$. Notice that each path in such a graph represents beautiful music. This is because any two adjacent edges share a common vertex, meaning they either have the same volume or the same pitch. Also, this music is not boring, as no three consecutive edges share a common vertex, meaning they cannot have a common volume or pitch. Now, notice that music consisting of all sounds is an Eulerian path in our graph. Thus, we have reduced our problem to the standard problem of finding an Eulerian path. The final asymptotic complexity is $O(n \log n)$ or $O(n)$, depending on the implementation of the Eulerian path.
|
[
"dfs and similar",
"graphs",
"implementation"
] | 2,300
|
#include <bits/stdc++.h>
using namespace std;
vector<set<int>> graph;
vector<int> ans;
void dfs(int v) {
while (!graph[v].empty()) {
int u = *graph[v].begin();
graph[u].erase(v);
graph[v].erase(u);
dfs(u);
}
ans.push_back(v);
}
signed main() {
int t;
cin >> t;
while (t--) {
graph.clear();
ans.clear();
int n;
cin >> n;
map<int, int> p, v;
map<pair<int, int>, int> toIndex;
vector<pair<int, int>> part;
for (int i = 1; i <= n; ++i) {
int volume, pitch;
cin >> volume >> pitch;
toIndex[make_pair(volume, pitch)] = i;
if (p.count(pitch) == 0) {
p[pitch] = graph.size();
graph.push_back(set<int>());
part.push_back(make_pair(0, pitch));
}
if (v.count(volume) == 0) {
v[volume] = graph.size();
graph.push_back(set<int>());
part.push_back(make_pair(volume, 0));
}
graph[v[volume]].insert(p[pitch]);
graph[p[pitch]].insert(v[volume]);
}
int root = 0;
int cnt = 0;
for (int i = 0; i < graph.size(); ++i) {
if (graph[i].size() % 2 == 1) {
++cnt;
root = i;
}
}
dfs(root);
if (ans.size() != n + 1 || cnt > 2) {
cout << "NO" << endl;
continue;
}
vector<int> out;
for (int i = 0; i < n; ++i) {
auto p1 = part[ans[i]];
auto p2 = part[ans[i + 1]];
out.push_back(toIndex[make_pair(p1.first + p2.first, p1.second + p2.second)]);
if (out[i] == 0) {
out.clear();
break;
}
}
if (out.empty()) {
cout << "NO" << endl;
} else {
cout << "YES" << endl;
for (int i = 0; i < n; ++i) {
cout << out[i] << " ";
}
cout << endl;
}
}
return 0;
}
|
2110
|
F
|
Faculty
|
In 2077, after the world was enslaved by robots, the robots decided to implement an educational reform, and now the operation of taking the modulus is only taught in the faculty of "Ancient World History". Here is one of the entrance tasks for this faculty:
We define the beauty of an array of positive integers $b$ as the maximum $f(b_i, b_j)$ over all pairs $1 \leq i, j \leq n$, where $f(x, y) = (x \bmod y) + (y \bmod x)$.
Given an array of positive integers $a$ of length $n$, output $n$ numbers, where the $i$-th number ($1 \leq i \leq n$) is the beauty of the array $a_1, a_2, \ldots, a_i$.
$x \bmod y$ is the remainder of the division of $x$ by $y$.
|
To solve the problem, several facts need to be noted. Fact I. $f(x, y) \leq \max(x, y)$. If $x = y$, then $f(x, y) = 0$. Otherwise, let $x < y$. Then $x \bmod y = x$. Thus, $f(x, y) = x + y - \lfloor{\frac{y}{x}}\rfloor \cdot x \leq y$, since $\lfloor{\frac{y}{x}}\rfloor \geq 1$. Fact II. We can only consider pairs with the maximum, that is, pairs $i, j$ such that $b_i$ is the maximum element of $b$. Let $b_{max}$ be the maximum element of the array $b$. Suppose that some pair $(b_i, b_j)$ not involving the maximum gives a value of $f$ greater than any pair involving $b_{max}$. Let $b_i \leq b_j$. From Fact I, it follows that $f(b_i, b_j) \leq b_j$. Now consider $f(b_j, b_{max})$. Since $b_j < b_{max}$, we have $b_j \bmod b_{max} = b_j$, which means $f(b_j, b_{max}) \geq b_j \geq f(b_i, b_j)$ - a contradiction. Fact III. Let $b_i < b_j$. If $b_j < b_i \cdot 2$, then $f(b_i, b_j) = b_j$. This is true because $\lfloor{\frac{b_j}{b_i}}\rfloor = 1$, which means $f(b_i, b_j) = b_i + b_j - \lfloor{\frac{b_j}{b_i}}\rfloor \cdot b_i = b_j$. Now let's go through the prefixes one by one. We will maintain $b_{max}$ - the maximum element in the prefix. Suppose we are currently at prefix $i$. If $b_i \leq b_{max}$, then the maximum has not changed, and we can simply update the answer with $f(b_{max}, b_i)$ from Fact II. $O(1)$. If $b_{max} < b_i < b_{max} \cdot 2$, then we can say that the answer is $b_i$, since $f(b_i, b_{max}) = b_i$ from Fact III. $O(1)$. If $b_{max} \cdot 2 \leq b_i$, then we will simply iterate through all previous $j < i$ and update the answer with $f(b_i, b_j)$. $O(n)$. The number of such iterations will be no more than $\log_2 A$, since in each of them the maximum increases by a factor of at least $2$. The final asymptotic complexity is $O(n \log A)$, where $A$ is the maximum element of the array $a$.
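A small brute-force check (illustration only, not part of the solution) of Facts I and III for small values:

#include <bits/stdc++.h>
using namespace std;
int f(int x, int y) { return x % y + y % x; }
int main() {
    for (int x = 1; x <= 300; ++x)
        for (int y = 1; y <= 300; ++y) {
            assert(f(x, y) <= max(x, y));                    // Fact I
            if (x < y && y < 2 * x) assert(f(x, y) == y);    // Fact III
        }
    cout << "Facts I and III verified for values up to 300\n";
}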
|
[
"brute force",
"greedy",
"math",
"number theory"
] | 2,400
|
#include <bits/stdc++.h>
using namespace std;
int f(int x, int y) {
return (x % y) + (y % x);
}
void Solve() {
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
int ans = 0;
int mx = a[0];
for (int i = 0; i < n; ++i) {
ans = max(ans, f(mx, a[i]));
if (a[i] > mx) {
if (a[i] >= mx * 2) {
mx = a[i];
for (int j = 0; j < i; ++j) {
ans = max(ans, f(a[i], a[j]));
}
} else {
mx = a[i];
ans = mx;
}
}
cout << ans << ' ';
}
cout << endl;
}
int main() {
ios::sync_with_stdio(false);
cin.tie(0);
int t;
cin >> t;
while (t--) {
Solve();
}
return 0;
}
|
2111
|
A
|
Energy Crystals
|
There are three energy crystals numbered $1$, $2$, and $3$; let's denote the energy level of the $i$-th crystal as $a_i$. Initially, all of them are discharged, meaning their energy levels are equal to $0$. Each crystal needs to be charged to level $x$ \textbf{(exactly $x$, not greater)}.
In one action, you can increase the energy level of any one crystal by any positive amount; however, the energy crystals are synchronized with each other, so an action can only be performed if the following condition is met afterward:
- for each pair of crystals $i$, $j$, it must hold that $a_{i} \ge \lfloor\frac{a_{j}}{2}\rfloor$.
What is the minimum number of actions required to charge all the crystals?
|
Let's relax the requirement that all crystals must be charged exactly to level $x$ and allow the energy level to rise above it, i.e., $a_{i} \ge x$ in the end. We will try to come up with a greedy algorithm to charge the crystals. The simplest idea is to take the crystal with the minimum energy level and charge it as much as possible. If all three crystals have $a_{i} \ge x$, then we have our answer. The energy level will change as follows: $[0, 0, 0] \to [\color{red}{1}, 0, 0] \to [1, \color{red}{1}, 0] \to [1, 1, \color{red}{3}] \to [1, \color{red}{3}, 3] \to [\color{red}{7}, 3, 3] \to \dots$ Each action takes a crystal and charges it to level $2 \cdot m + 1$, where $m$ is the minimum charge of the other two crystals at that moment. Note that when we apply the action to a crystal, its level goes from at most $m$ to $2 \cdot m + 1$, so it more than doubles; therefore, the answer can be obtained in $O(\log x)$ actions. It turns out that this algorithm already solves the problem, and the condition that the energy level must be exactly $x$ does not affect anything. This is because, at the last moment, when the greedy algorithm is about to charge some crystal to a level $2 \cdot m + 1 > x$, we can instead set it to exactly $x$. Let's check why this is permissible and does not change the number of moves: the state will be $[m, m, x]$, where $m < x$ and $2 \cdot m + 1 > x$, so the next two actions can be as follows: $[m, m, x] \to [\color{red}{x}, m, x] \to [x, \color{red}{x}, x]$ Meanwhile, the greedy algorithm would also perform two actions: $[m, m, 2 \cdot m + 1] \to [\color{red}{2 \cdot m + 1}, m, 2 \cdot m + 1] \to [2 \cdot m + 1, \color{red}{4 \cdot m + 3}, 2 \cdot m + 1]$
|
[
"greedy",
"implementation",
"math"
] | 800
|
#include<bits/stdc++.h>
using namespace std;
#define int long long
void solve(){
int x;
cin >> x;
int ans = 0;
int a1 = 0, a2 = 0, a3 = 0;
while(min({a1, a2, a3}) < x){
if (a1 <= a2 && a1 <= a3){
a1 = min(a2, a3) * 2 + 1;
}
else if (a2 <= a1 && a2 <= a3){
a2 = min(a1, a3) * 2 + 1;
}
else{
a3 = min(a1, a2) * 2 + 1;
}
ans++;
}
cout << ans << '\n';
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int tests = 1;
cin >> tests;
for (int test = 0; test < tests; test++){
solve();
}
return 0;
}
|
2111
|
B
|
Fibonacci Cubes
|
There are $n$ Fibonacci cubes, where the side of the $i$-th cube is equal to $f_{i}$, where $f_{i}$ is the $i$-th Fibonacci number.
In this problem, the Fibonacci numbers are defined as follows:
- $f_{1} = 1$
- $f_{2} = 2$
- $f_{i} = f_{i - 1} + f_{i - 2}$ for $i > 2$
There are also $m$ empty boxes, where the $i$-th box has a width of $w_{i}$, a length of $l_{i}$, and a height of $h_{i}$.
For each of the $m$ boxes, you need to determine whether all the cubes can fit inside that box. The cubes must be placed in the box following these rules:
- The cubes can only be stacked in the box such that the sides of the cubes are parallel to the sides of the box;
- Every cube must be placed either on the bottom of the box or on top of other cubes in such a way that all space below the cube is occupied;
- A larger cube cannot be placed on top of a smaller cube.
|
Notice the following: if we can fit the two largest cubes from the set into the box, then we can also fit all the smaller cubes into it. To fit the two largest cubes, it is sufficient that all sides of the box are at least $f_{n}$ and the largest side of the box is at least $f_{n + 1}$. The first condition is quite obvious; if it were not satisfied, we would not be able to fit even the largest cube into the box. The second condition, however, is a bit more interesting. Let's simplify the problem a bit and remove one dimension, meaning all cubes will turn into squares. We will check that the rectangle $f_{n} \times f_{n + 1}$ can accommodate all these squares. This resembles the classic picture of a Fibonacci spiral: each time we draw a square with a side of $f_{i}$, the remaining area turns into a rectangle with sides $f_{i}$ and $f_{i - 1}$. Now, if we add a third dimension $f_{n}$ to the rectangle and $f_{i}$ to the squares, all cubes will also remain within the rectangular parallelepiped. The rectangle could serve as either the bottom or one of the side faces. If the rectangle is a side face, then with the spiral arrangement described above some larger cubes would be resting on smaller cubes, which is not allowed. However, if we redraw the squares in the rectangle slightly differently, all requirements will be met.
|
[
"brute force",
"dp",
"implementation",
"math"
] | 1,100
|
#include<bits/stdc++.h>
using namespace std;
#define int long long
void solve(){
int n, m;
cin >> n >> m;
vector<vector<int>> a(m);
for (int i = 0; i < m; i++){
for (int j = 0; j < 3; j++){
int x;
cin >> x;
a[i].emplace_back(x);
}
sort(a[i].begin(), a[i].end());
}
vector<int> fib(n + 5);
fib[0] = 1;
fib[1] = 2;
for (int i = 2; i < n + 1; i++){
fib[i] = fib[i - 1] + fib[i - 2];
}
for (int i = 0; i < m; i++){
if (a[i][0] >= fib[n - 1] && a[i][1] >= fib[n - 1] && a[i][2] >= fib[n]){
cout << '1';
}
else{
cout << '0';
}
}
cout << '\n';
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int tests = 1;
cin >> tests;
for (int test = 0; test < tests; test++){
solve();
}
return 0;
}
|
2111
|
C
|
Equal Values
|
You are given an array $a_1, a_2, \dots, a_n$, consisting of $n$ integers.
In one operation, you are allowed to perform one of the following actions:
- Choose a position $i$ ($1 < i \le n$) and make all elements to the left of $i$ equal to $a_i$. That is, assign the value $a_i$ to all $a_j$ ($1 \le j < i$). The cost of such an operation is $(i - 1) \cdot a_i$.
- Choose a position $i$ ($1 \le i < n$) and make all elements to the right of $i$ equal to $a_i$. That is, assign the value $a_i$ to all $a_j$ ($i < j \le n$). The cost of such an operation is $(n - i) \cdot a_i$.
Note that the elements affected by an operation may already be equal to $a_i$, but that doesn't change the cost.
You are allowed to perform any number of operations (including zero). What is the minimum total cost to make all elements of the array equal?
|
The first minor observation is that the cost of both operations is the value of the element times the number of the elements affected by the operation. Let's also expand the scope of both operations so that we can apply the first operation to the first element (with cost $0$, since $0$ elements are affected) and the second operation to the last element. Fix the value $x$ that will be the final value of all elements of the array. Obviously, it only makes sense to apply the operations to the elements equal to $x$. Let's consider the operations of the first and second types separately. Note that if we apply an operation that changes values to the left at position $a$, and an operation that changes values to the right at position $b$, with $a > b$, we can always reduce the total cost by applying both operations to either $a$ or $b$. This means that there exists an optimal solution in which operations of the first type never affect elements changed by operations of the second type. Now, let's consider only the operations that change elements to the left. Suppose one operation is applied at position $a$, and another at position $b$. If instead of them we apply one operation at position $\max(a, b)$, the effect will be exactly the same. Similarly, we can show this for operations that change elements to the right. Thus, it is sufficient to apply one operation of each type. Let the operation to the left be applied at position $a$, and the operation to the right at position $b$ (with $a \le b$). All elements between $a$ and $b$ must be equal to $x$, because they are not affected by the operations. The total cost of such operations will be equal to $(a - 1) \cdot x + (n - b) \cdot x = x \cdot (n - 1 - (b - a))$. So, the answer depends on the distance between $a$ and $b$ and the value of $x$. Since all values between $a$ and $b$ must be equal to $x$, they form a continuous block of identical values. Therefore, to find the optimal answer, it is sufficient to iterate over the blocks of identical values in the array and choose the minimum answer from them. This can be done using the two-pointer method. Overall complexity: $O(n)$ per test case.
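Since the reference solution below is in Python, here is a minimal C++ sketch of the same block scan (64-bit arithmetic is assumed for the cost):

#include <bits/stdc++.h>
using namespace std;
int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<long long> a(n);
        for (auto &x : a) cin >> x;
        long long ans = LLONG_MAX;
        for (int i = 0, j = 0; i < n; i = j) {
            for (j = i; j < n && a[j] == a[i]; ++j);        // block of equal values a[i..j-1]
            ans = min(ans, (i + (n - j)) * a[i]);           // turn everything outside the block into a[i]
        }
        cout << ans << "\n";
    }
}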
|
[
"brute force",
"greedy",
"two pointers"
] | 1,100
|
for _ in range(int(input())):
n = int(input())
a = list(map(int, input().split()))
i = 0
ans = 10**18
while i < n:
j = i
while j < n and a[j] == a[i]:
j += 1
ans = min(ans, (i + n - j) * a[i])
i = j
print(ans)
|
2111
|
D
|
Creating a Schedule
|
A new semester is about to begin, and it is necessary to create a schedule for the first day. There are a total of $n$ groups and $m$ classrooms in the faculty. It is also known that each group has exactly $6$ classes on the first day, and the $k$-th class of each group takes place at the same time. Each class must be held in a classroom, and at the same time, there cannot be classes for more than one group in the same classroom.
Each classroom has its own index (at least three digits), and all digits of this index, except for the last two, indicate the floor on which the classroom is located. For example, classroom $479$ is located on the $4$-th floor, while classroom $31415$ is on the $314$-th floor. Between floors, one can move by stairs; for any floor $x > 1$, one can either go down to floor $x - 1$ or go up to floor $x + 1$; from the first floor, one can only go up to the second; from the floor $10^7$ (which is the last one), it is possible to go only to the floor $9999999$.
The faculty's dean's office has decided to create the schedule in such a way that students move as much as possible between floors, meaning that \textbf{the total number of movements between floors across all groups should be maximized}. When the students move from one floor to another floor, they take the shortest path.
For example, if there are $n = 2$ groups and $m = 4$ classrooms $[479, 290, 478, 293]$, the schedule can be arranged as follows:
\begin{center}
\begin{tabular}{|c||c||c|}
\hline
\textbf{Class No.} & \textbf{Group 1} & \textbf{Group 2} \\
\hline
\hline
$1$ & $290$ & $293$ \\
\hline
\hline
$2$ & $478$ & $479$ \\
\hline
\hline
$3$ & $293$ & $290$ \\
\hline
\hline
$4$ & $479$ & $478$ \\
\hline
\hline
$5$ & $293$ & $290$ \\
\hline
\hline
$6$ & $479$ & $478$ \\
\hline
\end{tabular}
\end{center}
In such a schedule, the groups will move between the $2$nd and $4$th floors each time, resulting in a total of $20$ movements between floors.
Help the dean's office create any suitable schedule!
|
Let $f_{i,j}$ be the floor where the $i$-th group has its $j$-th class. Then the number of moves between floors can be expressed as follows: $\sum\limits_{j=1}^{5} \sum\limits_{i=1}^{n} |f_{i,j} - f_{i,j+1}|$. Using the fact that $|x-y| = \max(x,y) - \min(x,y)$, we can rewrite it as $\sum\limits_{j=1}^{5} \sum\limits_{i=1}^{n} \left(\max(f_{i,j}, f_{i,j+1}) - \min(f_{i,j}, f_{i,j+1})\right)$, or, splitting the sums, as $\sum\limits_{j=1}^{5} \sum\limits_{i=1}^{n} \max(f_{i,j}, f_{i,j+1}) - \sum\limits_{j=1}^{5} \sum\limits_{i=1}^{n} \min(f_{i,j}, f_{i,j+1})$. Let's analyze how large we can make the first part of this expression and how small we can make the second part. Every auditorium can be used at most twice during two consecutive classes, so if the number of groups is even, then $\sum\limits_{i=1}^{n} \max(f_{i,j}, f_{i,j+1})$ cannot be greater than the sum of the $\frac{n}{2}$ maximum floors, multiplied by $2$. If $n$ is odd, then we also add the $(\lfloor\frac{n}{2}\rfloor+1)$-th maximum floor. Similarly, if $n$ is even, then $\sum\limits_{i=1}^{n} \min(f_{i,j}, f_{i,j+1})$ cannot be less than the sum of the $\frac{n}{2}$ minimum floors, multiplied by $2$; if $n$ is odd, then we also add the $(\lfloor\frac{n}{2}\rfloor+1)$-th minimum floor. This is a bound on the number of movements; if we achieve it, our answer will be optimal. Achieving it is actually not that difficult. Suppose the list of auditoriums is sorted (sorting it also sorts the list of floors where the auditoriums are located). If $n$ is even, we can split all groups into two parts of size $\frac{n}{2}$. The first half will have their first classes in the $\frac{n}{2}$ lowest auditoriums, then go to the $\frac{n}{2}$ highest auditoriums, then return to the $\frac{n}{2}$ lowest auditoriums, and so on. The second half will do the opposite: start in the $\frac{n}{2}$ highest auditoriums, then go to the $\frac{n}{2}$ lowest auditoriums, then return, and so on. If $n$ is odd, then the "extra" group can use the $(\lfloor\frac{n}{2}\rfloor + 1)$-th lowest auditorium for odd-indexed classes and the $(\lfloor\frac{n}{2}\rfloor + 1)$-th highest auditorium for even-indexed classes. This solution achieves the bound which we proved earlier, so it is optimal.
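As a quick sanity check of the bound, the optimal number of movements $5 \cdot (\text{max part} - \text{min part})$ can be computed directly from the sorted list of floors. The sketch below is an illustration added here, not part of the reference solution; the helper name and the floor extraction by integer division over the last two digits are assumptions based on the statement.
# Hedged sketch: compute the movement bound derived in the editorial.
def movement_bound(n, rooms):
    floors = sorted(r // 100 for r in rooms)   # floor = index without its last two digits
    half = n // 2
    low = floors[:half]                        # floors of the half lowest auditoriums
    high = floors[-half:] if half else []
    max_part = 2 * sum(high)
    min_part = 2 * sum(low)
    if n % 2 == 1:                             # the "extra" group alternates between these two rooms
        max_part += floors[-(half + 1)]
        min_part += floors[half]
    return 5 * (max_part - min_part)           # 5 transitions between 6 classes
print(movement_bound(2, [479, 290, 478, 293]))  # 20, matching the example in the statement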
|
[
"constructive algorithms",
"data structures",
"greedy",
"implementation",
"sortings"
] | 1,400
|
#include<bits/stdc++.h>
using namespace std;
void solve(){
int n, m;
cin >> n >> m;
vector<int> a(m);
for (int i = 0; i < m; i++){
cin >> a[i];
}
sort(a.begin(), a.end());
vector<vector<int>> ans(n, vector<int> (6));
for (int i = 0; i < n; i += 2){
if (i + 1 == n){
for (int j = 0; j < 6; j++){
if (j % 2 == 0){
ans[i][j] = a[i / 2];
}
else{
ans[i][j] = a[m - i / 2 - 1];
}
}
}
else{
for (int j = 0; j < 6; j++){
if (j % 2 == 0){
ans[i][j] = a[i / 2];
ans[i + 1][j] = a[m - i / 2 - 1];
}
else{
ans[i][j] = a[m - i / 2 - 1];
ans[i + 1][j] = a[i / 2];
}
}
}
}
for (int i = 0; i < n; i++){
for (int j = 0; j < 6; j++){
cout << ans[i][j] << ' ';
}
cout << '\n';
}
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int tests = 1;
cin >> tests;
for (int test = 0; test < tests; test++){
solve();
}
return 0;
}
|
2111
|
E
|
Changing the String
|
Given a string $s$ that consists only of the first three letters of the Latin alphabet, meaning each character of the string is either a, b, or c.
Also given are $q$ operations that need to be performed on the string. In each operation, two letters $x$ and $y$ from the set of the first three letters of the Latin alphabet are provided, and for each operation, one of the following two actions must be taken:
- change any (one) occurrence of the letter $x$ in the string $s$ to the letter $y$ (if at least one occurrence of the letter $x$ exists);
- do nothing.
The goal is to perform all operations in the given order in such a way that the string $s$ becomes lexicographically minimal.
Recall that a string $a$ is lexicographically less than a string $b$ if and only if one of the following conditions holds:
- $a$ is a prefix of $b$, but $a \neq b$;
- at the first position where $a$ and $b$ differ, the string $a$ has a letter that comes earlier in the alphabet than the corresponding letter in $b$.
|
First, let's try to understand what operations and sequences of operations make sense to perform. Each letter can be transformed at most two times: if we change it more than twice, then at two moments in time it will be the same, and all operations between them can be omitted. Clearly, there is no point in transforming the letter a at all, as it cannot be made lexicographically smaller. The letter b can only be transformed into the letter a, either directly or indirectly through the letter c. The letter c can be transformed either into the letter a (directly or indirectly through the letter b) or into the letter b (but only directly; there is no point in transforming it into a only to then transform it into b). This means that we are actually interested in the following sequences of transformations: $b \rightarrow a$; $b \rightarrow c \rightarrow a$; $c \rightarrow a$; $c \rightarrow b$; $c \rightarrow b \rightarrow a$. Moreover, there is no point in combining sequences of types $2$ and $5$ (the only ones with two operations), because if there are two sequences of types $2$ and $5$, we can instead transform them into two sequences of types $1$ and $3$. Now let's try to actually solve the problem. We will go through the string from left to right and for each character, we will try to transform it into the smallest possible character (considering that we still need to transform characters in the prefix and that some operations may be unavailable). When we encounter the character b, let's first try to build a sequence of type $1$ for it, and if that doesn't work, a sequence of type $2$. When we encounter the character c, we will first try to build a sequence of type $3$, if that doesn't work, then type $5$, and if that still doesn't work, type $4$. To build the sequences, we will keep a set of all queries for each possible transformation of one character into another; if there is only one transformation in the sequence, we will take the earliest one; if there are two, we will take the earliest of the first type and use lower_bound to find the earliest of the second type that can be taken with it. Let's prove that this greedy approach works. To do this, we will analyze two "dangerous" moments in our greedy approach: (1) what if transforming characters into a directly is not always beneficial? We try to transform them directly first and only then indirectly; what if this is incorrect? (2) What if our choice of the earliest operations for transformations is not correct because it may prohibit us from performing some sequence of operations later? We will prove that point $1$ is not a problem. Suppose in the optimal solution we transformed some character $s_i$ into a indirectly, although at that moment we could have done it directly. If we still have unused operations to transform $s_i$ into a directly, we can use the direct transformation instead of the indirect one, and the answer will not change. Otherwise, suppose we used that direct transformation on character $s_j$ ($j > i$). If $s_i = s_j$, we can "swap" them, and the answer will not change. If $s_i \ne s_j$, this means that we indirectly transformed the character b into a and the character c into a, and we previously proved that in the optimal solution we can ignore such cases. Now let's prove that point $2$ is also not a problem. 
If we use a single transformation of character b or c into a, it is always beneficial to take the earliest operation of that type: later operations of that type may be needed for indirect transformations (sequences of transformations of types $2$ or $5$), and in such sequences the operation of transforming into a is the second one, so it is beneficial to leave later operations of transforming into a for such sequences. We can also use a single transformation from c to b, but we only use it when the operations for transforming b into a have either already ended or the last such operation occurs earlier than the first operation of transforming c into b; in that case we definitely cannot use the transformation from c to b in a sequence of two operations, so we can take any such operation (including the earliest one). Therefore, the solution can be implemented as follows. For each type of operation, form a set of all queries in which such an operation can be performed. Go through the string from left to right and greedily transform the current character into the smallest possible character: first, we try to transform it into a (first directly, then indirectly); if that doesn't work, then into b. In each transformation, we use the earliest operation (if two operations are needed, the earliest operation of the first type and the earliest operation of the second type that comes after the first). The complexity of the solution is $O((n + q) \log q)$.
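For very small inputs, the greedy can be cross-checked against an exhaustive search that, for every operation, either skips it or applies it to some occurrence, exactly as the statement allows. A hedged sketch (exponential, reference only; the function name is ours):
# Hedged sketch: brute-force reference for tiny n and q.
def best_string(s, ops):
    if not ops:
        return s
    (x, y), rest = ops[0], ops[1:]
    best = best_string(s, rest)               # option: do nothing on this operation
    for i, c in enumerate(s):                 # option: change one occurrence of x to y
        if c == x:
            cand = best_string(s[:i] + y + s[i + 1:], rest)
            if cand < best:
                best = cand
    return best
print(best_string("cb", [("c", "b"), ("b", "a")]))  # "ab": first c->b, then b->a on the first character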
|
[
"binary search",
"data structures",
"greedy",
"implementation",
"sortings",
"strings"
] | 1,900
|
#include<bits/stdc++.h>
using namespace std;
#define int long long
void solve(){
int n, q;
cin >> n >> q;
string s;
cin >> s;
vector<vector<set<int>>> st(3, vector<set<int>> (3));
for (int i = 0; i < q; i++){
char x, y;
cin >> x >> y;
st[x - 'a'][y - 'a'].insert(i);
}
for (int i = 0; i < n; i++){
if (s[i] == 'a'){
continue;
}
if (s[i] == 'b'){
if (!st[1][0].empty()){
st[1][0].erase(st[1][0].begin());
s[i] = 'a';
continue;
}
if (!st[1][2].empty()){
auto ind = *st[1][2].begin();
auto lb = st[2][0].lower_bound(ind);
if (lb != st[2][0].end()){
st[1][2].erase(ind);
st[2][0].erase(lb);
s[i] = 'a';
continue;
}
}
}
if (s[i] == 'c'){
if (!st[2][0].empty()){
st[2][0].erase(st[2][0].begin());
s[i] = 'a';
continue;
}
if (!st[2][1].empty()){
auto ind = *st[2][1].begin();
st[2][1].erase(ind);
s[i] = 'b';
auto lb = st[1][0].lower_bound(ind);
if (lb != st[1][0].end()){
st[1][0].erase(lb);
s[i] = 'a';
}
}
}
}
cout << s << '\n';
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int tests = 1;
cin >> tests;
for (int test = 0; test < tests; test++){
solve();
}
return 0;
}
|
2111
|
F
|
Puzzle
|
You have been gifted a puzzle, where each piece of this puzzle is a square with a side length of one. You can glue any picture onto this puzzle, cut it, and obtain an almost ordinary jigsaw puzzle.
Your friend is an avid mathematician, so he suggested you consider the following problem. Is it possible to arrange the puzzle pieces in such a way that the following conditions are met:
- the pieces are aligned parallel to the coordinate axes;
- the pieces do not overlap each other;
- all pieces form a single connected component (i.e., there exists a path from each piece to every other piece along the pieces, where each two consecutive pieces share a side);
- the ratio of the perimeter of this component to the area of this component equals $\frac{p}{s}$;
- the number of pieces used does not exceed $50\,000$.
Can you handle it?
\begin{center}
{\small For this figure, the ratio of the perimeter to the area is $\frac{11}{9}$}
\end{center}
|
If a figure consists of a single piece, then its perimeter-to-area ratio is $4$ to $1$. In fact, this is the maximum ratio that can be achieved; that is, if $\frac{p}{s} > 4$, the answer is $-1$. Now we need to understand what other ratios can exist. Suppose we have placed one piece; let's add one neighboring piece to it. Notice that if the next unit piece touches one side of our figure, the area increases by $1$, and the perimeter increases by $2$. Thus, if we add $x$ pieces to our very first one in this manner, the ratio will be $\frac{4 + 2 \cdot x}{1 + x}$. This can be transformed into the following form: $\frac{2 + (2 + 2 \cdot x)}{1 + x} = \frac{2}{1 + x} + \frac{2 \cdot (x + 1)}{x + 1} = \frac{2}{1 + x} + 2$. This means that if the ratio is $> 2$ and can be represented in this form, then we can draw it as a strip figure, that is, a rectangle $1 \times (x + 1)$. Now let's imagine a situation where we added $x$ pieces in the manner described above, and then we add a piece that touches two or more sides of our figure. In this case, the area increases by $1$, but the perimeter does not increase: $\frac{4 + 2 \cdot x + 0}{1 + x + 1} = \frac{2 \cdot (2 + x)}{2 + x} = 2$. Thus, if we constructed the figure in this way, the ratio is $\frac{p}{s} \le 2$. This means that if the ratio cannot be represented in the form $\frac{2}{1 + x} + 2$ and is greater than two, the answer is also $-1$. We are left to deal with the case $\frac{p}{s} \le 2$. When it equals two, we can draw a square with a side of $2$, and that will be our answer. If $\frac{p}{s} < 2$, then any such ratio can be achieved. The ratios can be divided into two cases: when the numerator is even and when it is odd. When the numerator is odd, it is straightforward: we remember that if we place a piece on some figure so that it touches only one side, the perimeter increases by $2$, and the area by $1$. So let's subtract $2$ from $p$ and $1$ from $s$ until we get $\frac{1}{s}$. Such a ratio can easily be drawn as a square with a side of $4 \cdot s$. Thus, any ratio in this case can be represented as a square to which a strip has been added. When the numerator is even, let's subtract $2$ from $p$ and $1$ from $s$ until we get the ratio $\frac{2}{s}$. This ratio can be represented as a square with a side of $2 \cdot s$. Therefore, all figures with a ratio $\frac{p}{s} \le 2$ can be drawn as a square to which a strip has been added. Let's estimate the number of pieces used in such a solution. Obviously, the worst case is when we need to draw a large square, for example, for the ratio $\frac{p}{s} = \frac{1}{50}$ or $\frac{p}{s} = \frac{2}{49}$. But even in such cases, only $16 \cdot 50^{2} = 40\,000$ and $4 \cdot 49^{2} = 9\,604$ pieces are needed, respectively.
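The construction can be sanity-checked numerically: build the square-plus-strip figure described above for a given $p/s \le 2$ and verify that its perimeter-to-area ratio is exactly $p/s$. The sketch below is only an illustration of the argument (helper names are ours); it attaches the strip along one row, similarly to the reference solution's output.
# Hedged sketch: verify the square-plus-strip construction for p/s <= 2.
from fractions import Fraction
def check_construction(p0, s0):
    p, s, k = p0, s0, 0
    while p > 2:                       # peel strip pieces: each removes (2, 1) from (p, s)
        p, s, k = p - 2, s - 1, k + 1
    if p == 2:
        side, strip = 2 * s, 4 * s * k
    else:                              # p == 1
        side, strip = 4 * s, 16 * s * k
    cells = {(i, j) for i in range(side) for j in range(side)}
    cells |= {(0, side + i) for i in range(strip)}   # strip glued to one side of the square
    area = len(cells)
    # perimeter of a polyomino = 4 * #cells - 2 * #adjacent cell pairs
    adj = sum((x + 1, y) in cells for x, y in cells) + sum((x, y + 1) in cells for x, y in cells)
    return Fraction(4 * area - 2 * adj, area) == Fraction(p0, s0)
print(all(check_construction(p, s) for s in range(1, 15) for p in range(1, 2 * s + 1)))  # True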
|
[
"brute force",
"constructive algorithms",
"greedy",
"math"
] | 2,400
|
#include<bits/stdc++.h>
using namespace std;
#define int long long
void solve(){
int p, s;
cin >> p >> s;
if (p > 4 * s){
cout << "-1\n";
return;
}
if (p > 2 * s){
for (int i = 1; i <= 50000; i++){
if (p * i == (2 + 2 * i) * s){
cout << i << '\n';
for (int j = 0; j < i; j++){
cout << "0 " << j << '\n';
}
return;
}
}
cout << "-1\n";
return;
}
int k = 0;
while(p > 2){
p -= 2;
s -= 1;
k++;
}
if (p == 2){
cout << 4 * s * s + 4 * s * k << '\n';
for (int i = 0; i < 2 * s; i++){
for (int j = 0; j < 2 * s; j++){
cout << i << ' ' << j << '\n';
}
}
for (int i = 0; i < 4 * s * k; i++){
cout << 0 << ' ' << 2 * s + i << '\n';
}
}
else{
cout << 16 * s * s + 16 * s * k << '\n';
for (int i = 0; i < 4 * s; i++){
for (int j = 0; j < 4 * s; j++){
cout << i << ' ' << j << '\n';
}
}
for (int i = 0; i < 16 * s * k; i++){
cout << 0 << ' ' << 4 * s + i << '\n';
}
}
}
signed main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);
cout.tie(0);
int tests = 1;
cin >> tests;
for (int test = 0; test < tests; test++){
solve();
}
return 0;
}
|
2111
|
G
|
Divisible Subarrays
|
\textbf{Technically, this is an interactive problem.}
An array $a$ of $m$ numbers is called divisible if at least one of the following conditions holds:
- There exists an index $i$ ($1 \le i < m$) and an integer $x$ such that for all indices $j$ ($j \le i$), it holds that $a_{j} \le x$ and for all indices $k$ ($k > i$), it holds that $a_{k} > x$.
- There exists an index $i$ ($1 \le i < m$) and an integer $x$ such that for all indices $j$ ($j \le i$), it holds that $a_{j} > x$ and for all indices $k$ ($k > i$), it holds that $a_{k} \le x$.
You are given a permutation $p$ of integers $1, 2, \dots, n$. Your task is to answer queries of the following form fast: if we take only the segment [$l$, $r$] from the permutation, that is, the numbers $p_{l}, p_{l + 1}, \dots, p_{r}$, is this subarray of numbers divisible?
Queries will be submitted in interactive mode in groups of $10$, meaning you will not receive the next group of queries until you output all answers for the current group.
|
Solution 1: Note that you are asked to solve the problem "online". This means there is some simple "offline" solution that is being cut off. In other words, the problem can be solved more easily if you could answer the queries in an arbitrary order. Let's iterate over $x$ in an increasing order. Maintain the following binary array $b$: $b_i$ is equal to $0$ if $p_i \le x$, and $1$ if $p_i > x$. When you move from $x$ to $x+1$, find $x+1$ in the permutation and change the corresponding value in $b$ from $1$ to $0$. Then, the segment from $l$ to $r$ is divisible by the value $x$ if the corresponding subsegment in $b$ is of the form $000\dots0011\dots111$ or $111\dots1100\dots000$, where the number of $0$s and $1$s is strictly greater than $0$. Let's consider blocks of consecutive zeros and ones (maximal by inclusion) in the array $b$. Now you can say that $l$ and $r$ must belong to neighboring blocks. Note that when moving from $x$ to $x+1$, the number of blocks in the array changes by at most $2$. That is, for all $n$ possible values of $x$, there are $O(n)$ different blocks in total. The blocks themselves can be maintained in a set of pairs. The blocks of equal elements look like $\{(L_1, R_1], (L_2, R_2], \dots, (L_k, R_k]\}$. For convenience, let's store the blocks as half-intervals. Each time the blocks change, you should write down new neighboring pairs in the form of triples $(L_i, L_{i+1}, R_{i+1})$. In this interpretation, the segment $[l, r]$ is divisible if there exists a triple $(L, M, R)$ such that $L \le l < M \le r < R$. This check can be performed using a sweep line. For each triple, let's add two events: at time $L$, the triple is turned on; at time $M$, the triple is turned off. Let's process the events in order of increasing time. The query $[l, r]$ can be answered at time $l$. Each triple $(L, M, R)$, that is currently on, generates a half-interval $[M, R)$. So, you need to check if there is at least one half-interval that includes index $r$. This can be implemented using a segment tree. When the triple is turned on, you add $1$ on the half-interval $[M, R)$. When it is turned off, you subtract $1$. To check, you can query the value at point $r$. If the value is $0$, the answer is "NO"; otherwise, it is "YES". This algorithm solves the problem "offline" because it answers queries not in the order of the input but in the order of increasing $l$. You can apply the classic trick to convert the problem to "online". At each moment of time, you are only interested in the state of the segment tree, so you can make the tree persistent, saving a version at each moment of time. Now you can answer the queries "online". Overall complexity: $O((n + q) \log n)$. Solution 2: Without loss of generality, let's check only the first condition. To check the second one, you can reverse the array (and transform $l$ and $r$ accordingly) and check the first condition. Now let's restate the condition as follows: there exists an index $i$ such that the maximum to the left of $i$ (inclusive) is less than or equal to the minimum to the right of $i$ (exclusive). Let's learn to check if the array $a$ is divisible in a smarter way than by iterating through all possible $i$ and checking the maximum and minimum. Let's fix the index of the maximum in the left half, let it be $L$. Where can the cut $i$ be? All values in the right half must be greater than $a_L$. That is, let's find the index of the nearest number to the right of $L$ that is greater than $a_L$. Let this be index $j$. 
Then all values on indices from $L$ to $j$ (exclusive) must belong to the left half. The index $j$ cannot belong to the left half, as $a_j > a_L$, and it's been fixed that $a_L$ is the maximum. Thus, for each prefix maximum (since the left half is a prefix of the array $a$), there is only one candidate for the cut: the index just before the next prefix maximum. First, let's precompute the index of the nearest greater number to the right for each element. This can be done using a monotonic stack by traversing from right to left. Let's call this $\mathit{nxt}_t$ for each index $t$. Then the solution should go as follows. Let's try to answer the query. Find the first and second prefix maxima on the segment $[l, r]$. Let them be at indices $t$ and $\mathit{nxt}_t$, respectively. Check that $\mathit{nxt}_t$ gives a valid cut for $[l, r]$ by ensuring that all values on indices from $\mathit{nxt}_t$ to $r$ are strictly greater than $a_t$. If this is not the case, move on to the second and third prefix maxima ($\mathit{nxt}_t$ and $\mathit{nxt}_{\mathit{nxt}_t}$), and so on. If the next prefix maximum lies to the right of $r$, and no pair of neighboring prefix maxima fits, then the answer is "NO". Obviously, you can make $O(n)$ of these jumps in the worst case, so for now the solution is too slow. Ideally, you would like to replace trivial jumps with binary lifting to answer queries in $O(\log n)$. However, for this, you want to have a condition that can be used to check the cuts in bulk, that is, to perform the check simultaneously for $2^k$ cuts. Such a condition can be stated as follows. Once again, let there be a prefix maximum at index $t$ and the next prefix maximum at index $\mathit{nxt}_t$. Then for all indices from $\mathit{nxt}_t$ to $r$, all values must be greater than $a_t$. In other words, the nearest number to the right of $\mathit{nxt}_t$ that is less than $a_t$ must lie strictly to the right of $r$. Great, now you can save this index of the nearest number in the binary lifting. Let's call it $\mathit{val}_t$. To check many cuts at once, you can do the following: if at least one of the values $\mathit{val}_t$ is greater than $r$, then the answer is "YES". That is, if the maximum of them is greater than $r$, then the answer is "YES". Thus, you can also save the maximum in the binary lifting. The only thing left is to learn how to find the nearest number to the right of $\mathit{nxt}_t$ that is less than $a_t$. This can also be done using a monotonic stack with a sweep line. In the first pass, let's find the values $\mathit{nxt}_t$ for all $t$ with an increasing monotonic stack (the next larger, the next larger after that, and so on). In the second pass from right to left, maintain a decreasing monotonic stack. To find the nearest number to the right of $\mathit{nxt}_t$ that is less than $a_t$, you can perform a binary search on the stack built for index $\mathit{nxt}_t$. After these two passes with the monotonic stack, you finally have all the information needed for the binary lifting. Calculate the binary lifting, while saving the maximum of $\mathit{val}$ over $2^k$ jumps. Then you can answer the query as follows. Start from index $l$. Let $\mathit{nxt}[l][k]$ be the index after $2^k$ jumps, and $\mathit{val}[l][k]$ be the maximum of $\mathit{val}$ over the corresponding $2^k$ jumps. If $\mathit{nxt}[l][k] > r$, decrease $k$. If $k = -1$, the answer is "NO". Otherwise, if $\mathit{val}[l][k] > r$, the answer is "YES". Otherwise, set $l = \mathit{nxt}[l][k]$ and continue checking. 
Overall complexity: $O((n + q) \log n)$.
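Independently of either solution, a direct per-query check of the definition is useful for understanding the condition and for stress-testing. A hedged sketch of such a naive $O(m)$-per-query reference (function names are ours, not from the reference solution):
# Hedged sketch: naive reference check of the "divisible" condition for one subarray.
def is_divisible(v):
    def has_cut(w):
        # first condition: some cut i with max(w[:i+1]) < min(w[i+1:])
        suf_min = list(w)
        for i in range(len(w) - 2, -1, -1):
            suf_min[i] = min(suf_min[i + 1], w[i])
        pref_max = w[0]
        for i in range(len(w) - 1):
            pref_max = max(pref_max, w[i])
            if pref_max < suf_min[i + 1]:
                return True
        return False
    # the second condition is the first one applied to the reversed array
    return has_cut(v) or has_cut(v[::-1])
p = [2, 4, 1, 3]
print(is_divisible(p[0:2]), is_divisible(p))   # True False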
|
[
"binary search",
"bitmasks",
"brute force",
"data structures",
"interactive"
] | 2,900
|
#include <bits/stdc++.h>
using namespace std;
#define forn(i, n) for (int i = 0; i < int(n); ++i)
const int INF = 1e9;
const int LOGN = 18;
struct solver{
int n;
vector<int> p;
solver(const vector<int> &p) : p(p){
n = p.size();
build();
}
vector<int> nxtmn, nxtmx;
vector<vector<int>> up;
vector<vector<int>> mx;
void build(){
nxtmx.resize(n);
vector<int> stmn, stmx;
for (int i = n - 1; i >= 0; --i){
while (!stmx.empty() && p[stmx.back()] < p[i])
stmx.pop_back();
nxtmx[i] = (stmx.empty() ? n : stmx.back());
stmx.push_back(i);
}
vector<vector<int>> qr(n + 1);
forn(i, n) qr[nxtmx[i]].push_back(i);
nxtmn.assign(n, n);
for (int i = n - 1; i >= 0; --i){
while (!stmn.empty() && p[stmn.back()] > p[i])
stmn.pop_back();
stmn.push_back(i);
for (int j : qr[i]){
int l = 0, r = int(stmn.size()) - 1;
while (l <= r){
int m = (l + r) / 2;
if (p[stmn[m]] < p[j]){
nxtmn[j] = stmn[m];
l = m + 1;
}
else{
r = m - 1;
}
}
}
}
up.assign(n + 1, vector<int>(LOGN));
mx.assign(n + 1, vector<int>(LOGN));
forn(i, n){
up[i][0] = nxtmx[i];
mx[i][0] = nxtmn[i];
}
up[n][0] = n;
mx[n][0] = 0;
for (int j = 1; j < LOGN; ++j) forn(i, n + 1){
up[i][j] = up[up[i][j - 1]][j - 1];
mx[i][j] = max(mx[i][j - 1], mx[up[i][j - 1]][j - 1]);
}
}
bool query(int l, int r){
int v = l;
for (int i = LOGN - 1; i >= 0; --i){
if (up[v][i] >= r) continue;
if (mx[v][i] >= r) return true;
v = up[v][i];
}
return false;
}
};
int main(){
#ifdef _DEBUG
freopen("input.txt", "r", stdin);
#endif
cin.tie(0);
ios::sync_with_stdio(false);
int n;
cin >> n;
vector<vector<int>> p(2, vector<int>(n));
forn(i, n){
cin >> p[0][i];
--p[0][i];
}
p[1] = p[0];
reverse(p[1].begin(), p[1].end());
solver p0(p[0]);
solver p1(p[1]);
int m;
cin >> m;
forn(i, m){
if (i % 10 == 0){
cout.flush();
}
int l, r;
cin >> l >> r;
--l; -- r;
bool ans = p0.query(l, r + 1) || p1.query(n - r - 1, n - l);
cout << (ans ? "YES\n" : "NO\n");
}
cout.flush();
}
|
2113
|
A
|
Shashliks
|
You are the owner of a popular shashlik restaurant, and your grill is the heart of your kitchen. However, the grill has a peculiarity: after cooking each shashlik, its temperature drops.
You need to cook as many portions of shashlik as possible, and you have an unlimited number of portions of two types available for cooking:
- The first type requires a temperature of at least $a$ degrees at the start of cooking, and after cooking, the grill's temperature decreases by $x$ degrees.
- The second type requires a temperature of at least $b$ degrees at the start of cooking, and after cooking, the grill's temperature decreases by $y$ degrees.
Initially, the grill's temperature is $k$ degrees. Determine the maximum total number of portions of shashlik that can be cooked.
Note that the grill's temperature can be negative.
|
Notice that, among the portions we can currently cook, it is beneficial to cook the one whose cooking decreases the grill temperature by the smallest amount. Indeed, the higher the current temperature of the grill, the more portions we can still prepare. Therefore, the complete solution is as follows: first cook the maximum possible number of portions of the type with the minimal temperature decrease, and after that cook the maximum possible number of portions of the other type.
|
[
"greedy",
"math"
] | 800
|
#include <iostream>
using namespace std;
void solve() {
int t, a, b, x, y;
cin >> t >> a >> b >> x >> y;
auto solve = [&](int t, int a, int b, int x, int y) {
int cur = 0;
cur += max((t - a + x) / x, 0);
t -= max((t - a + x) / x, 0) * x;
cur += max((t - b + y) / y, 0);
return cur;
};
cout << max(solve(t, a, b, x, y), solve(t, b, a, y, x)) << endl;
}
signed main() {
int q = 1;
cin >> q;
while (q --> 0)
solve();
return 0;
}
|
2113
|
B
|
Good Start
|
The roof is a rectangle of size $w \times h$ with the bottom left corner at the point $(0, 0)$ on the plane. Your team needs to completely cover this roof with identical roofing sheets of size $a \times b$, with the following conditions:
- The sheets cannot be rotated (not even by $90^\circ$).
- The sheets must not overlap (but they can touch at the edges).
- The sheets can extend beyond the boundaries of the rectangular roof.
A novice from your team has already placed two such sheets on the roof in such a way that the sheets \textbf{do not overlap} and each of them \textbf{partially covers the roof}.
Your task is to determine whether it is possible to completely tile the roof without removing either of the two already placed sheets.
|
Let $x_1 \neq x_2$ and $y_1 \neq y_2$. We will show that the necessary and sufficient condition to tile the roof is $(x_2 - x_1)~\text{mod}~a = 0$ or $(y_2 - y_1)~\text{mod}~b = 0$. If this condition is satisfied, then we can tile the roof either "by columns" or "by rows". On the other hand, suppose there is some tiling; then some sheet covers the cell $(x_1 - 1, y_1)$, and either it is in the same column as the first placed sheet, or it also covers the cell $(x_1 - 1, y_1 - 1)$. If the second case occurs, note that the sheet covering the cell $(x_1, y_1 - 1)$ must be in the same row as it. Thus, any tiling is either by rows or by columns, and our condition must be satisfied. In the cases $x_1 = x_2$ or $y_1 = y_2$, the tiling is forced to align with the two placed sheets in the corresponding direction, so only the condition on the other axis remains ($(y_2 - y_1)~\text{mod}~b = 0$ or $(x_2 - x_1)~\text{mod}~a = 0$, respectively); this follows from the reasoning described earlier. The final solution, taking into account the cases described, looks as follows: if $x_1 = x_2$, the answer is "Yes" if and only if $(y_2 - y_1)~\text{mod}~b = 0$; if $y_1 = y_2$, the answer is "Yes" if and only if $(x_2 - x_1)~\text{mod}~a = 0$; otherwise, the answer is "Yes" if and only if $(x_2 - x_1)~\text{mod}~a = 0$ or $(y_2 - y_1)~\text{mod}~b = 0$.
|
[
"constructive algorithms",
"math"
] | 1,200
|
def solve():
w, h, a, b = map(int, input().split())
x1, y1, x2, y2 = map(int, input().split())
if x1 == x2:
if abs(y1 - y2) % b == 0:
return "Yes"
else:
return "No"
if y1 == y2:
if abs(x1 - x2) % a == 0:
return "Yes"
else:
return "No"
if (x1 - x2) % a == 0 or (y1 - y2) % b == 0:
return "Yes"
return "No"
t = int(input())
for _ in range(t):
print(solve())
|
2113
|
C
|
Smilo and Minecraft
|
The boy Smilo is playing Minecraft! To prepare for the battle with the dragon, he needs a lot of golden apples, and for that, he requires a lot of gold. Therefore, Smilo goes to the mine.
The mine is a rectangular grid of size $n \times m$, where each cell can be either gold ore, stone, or an empty cell. Smilo can blow up dynamite in any empty cell. When dynamite explodes in an empty cell with coordinates $(x, y)$, all cells within a square of side $2k + 1$ centered at cell $(x, y)$ become empty. If gold ore was located \textbf{strictly inside} this square (not on the boundary), it disappears. However, if the gold ore was on the boundary of this square, Smilo collects that gold.
Dynamite can only be detonated inside the mine, but the explosion square can extend beyond the mine's boundaries.
Determine the maximum amount of gold that Smilo can collect.
|
Let us consider the first explosion. Note that all the gold ore that was strictly inside this square will be lost (the gold on its boundary is collected). However, observe that all the remaining gold ore in the mine will be obtained for sure: indeed, we make explosions first in all the cells on the border of the square with side $3$ centered at the point of the first explosion, then on the border of the square with side $5$, then $7$, and so on. Thus, the problem reduces to minimizing the loss of gold during the first explosion. To do this, simply iterate over the possible locations of the first explosion and keep track of the amount of gold strictly inside the current explosion square, which can be done using prefix sums.
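A slow but obviously correct reference, which scans the strictly interior cells of the explosion square for every empty cell, is convenient for validating the prefix-sum solution on small grids. A hedged sketch (names are ours; only suitable for tiny inputs):
# Hedged sketch: O(n*m*k^2) reference for small grids only.
def max_gold_slow(mine, k):
    n, m = len(mine), len(mine[0])
    total = sum(row.count('g') for row in mine)
    loss = total                                   # mirrors the reference: no empty cell -> answer 0
    for i in range(n):
        for j in range(m):
            if mine[i][j] != '.':
                continue
            inside = 0                             # gold strictly inside the (2k+1)-square at (i, j)
            for x in range(max(i - k + 1, 0), min(i + k, n)):
                for y in range(max(j - k + 1, 0), min(j + k, m)):
                    inside += mine[x][y] == 'g'
            loss = min(loss, inside)
    return total - loss
print(max_gold_slow(["g.g", "...", "ggg"], 2))     # 3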
|
[
"brute force",
"constructive algorithms",
"greedy"
] | 1,700
|
#include <algorithm>
#include <iostream>
#include <vector>
#include <string>
#include <map>
#include <set>
#include <unordered_map>
#include <random>
#include <chrono>
#include <cassert>
#include <numeric>
#include <bitset>
#include <iomanip>
#include <queue>
#include <unordered_set>
#include <fstream>
#include <random>
using namespace std;
using ll = long long;
mt19937 gen(chrono::steady_clock::now().time_since_epoch().count());
const int MAXN = 500;
int sum[MAXN + 1][MAXN + 1];
int n, m, k;
int check(int i, int mx) {
return min(max(i, 0), mx);
}
int pref(int i, int j) {
return sum[check(i, n)][check(j, m)];
}
void solve() {
cin >> n >> m >> k; k--;
vector<string> mine(n);
int all_gold = 0;
for (int i = 0; i < n; i++) {
cin >> mine[i];
for (int j = 0; j < m; j++) {
all_gold += (mine[i][j] == 'g');
}
}
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
sum[i + 1][j + 1] = sum[i + 1][j] + sum[i][j + 1] - sum[i][j] + (mine[i][j] == 'g');
}
}
int ans = all_gold;
for (int i = 0; i < n; i++) {
for (int j = 0; j < m; j++) {
if (mine[i][j] == '.') {
int a = i - k, b = i + k + 1, c = j - k, d = j + k + 1;
ans = min(ans, pref(b, d) - pref(a, d) - pref(b, c) + pref(a, c));
}
}
}
cout << all_gold - ans << "\n";
}
int main() {
ios_base::sync_with_stdio(false); cin.tie(0);
int t = 1;
cin >> t;
while (t--) {
solve();
}
}
|
2113
|
D
|
Cheater
|
You are playing a new card game in a casino with the following rules:
- The game uses a deck of $2n$ cards with different values.
- The deck is evenly split between the player and the dealer: each receives $n$ cards.
- Over $n$ rounds, the player and the dealer simultaneously play one top card from their hand. The cards are compared, and the point goes to the one whose card has a higher value. The winning card is removed from the game, while the losing card is returned to the hand \textbf{and placed on top of the other cards} in the hand of the player who played it.
Note that the game always lasts \textbf{exactly} $n$ rounds.
You have tracked the shuffling of the cards and know the order of the cards in the dealer's hand (from top to bottom). You want to maximize your score, so you can swap any two cards in your hand \textbf{no more than once} (to avoid raising suspicion).
Determine the maximum number of points you can achieve.
|
Let us consider all prefix minima of the player's array, and let their positions be $k_{1}, k_{2}, \ldots, k_{s}$. Note that if at some round the card $a_{k_{j}}$ wins ($a_{k_{j}} > b_{t}$), then the cards $a_{k_{j} + 1}, a_{k_{j} + 2}, \ldots, a_{k_{j + 1}}$ will also win, since all of them have a value greater than $a_{k_{j}}$. The predicate "we can score at least $m$ points" is monotonic in $m$, so we use binary search on the answer. To check a fixed $m$, we take the prefix of the player's cards of length $m$, swap the minimum on this prefix with the maximum on the remaining suffix, and simulate the game.
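As a cross-check of the binary-search solution, a brute force that tries every possible swap (or no swap at all) and simulates the $n$ rounds literally by the rules is handy on small inputs. A hedged sketch (function names are ours):
# Hedged sketch: O(n^3) reference - try every swap and simulate the game.
def simulate(a, b):
    a, b = a[:], b[:]
    points = 0
    for _ in range(len(b)):
        if a[0] > b[0]:
            points += 1
            a.pop(0)        # the winning card is removed; the dealer's losing card stays on top
        else:
            b.pop(0)        # the dealer's winning card is removed; the player's card stays on top
    return points
def best_score(a, b):
    n = len(a)
    best = simulate(a, b)
    for i in range(n):
        for j in range(i + 1, n):
            a[i], a[j] = a[j], a[i]
            best = max(best, simulate(a, b))
            a[i], a[j] = a[j], a[i]
    return best
print(best_score([1, 5, 2], [3, 4, 6]))   # 1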
|
[
"binary search",
"constructive algorithms",
"greedy",
"implementation"
] | 2,200
|
#include <bits/stdc++.h>
using namespace std;
#define sz(x) (int) ((x).size())
#define all(x) (x).begin(), (x).end()
#define rall(x) (x).rbegin(), (x).rend()
typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<ll, ll> pll;
const char en = '\n';
const int INF = 1e9 + 7;
const ll INFLL = 1e18;
mt19937 rnd(chrono::high_resolution_clock::now().time_since_epoch().count());
#ifdef LOCAL
#include "debug.h"
#define numtest(x) cerr << "Test #" << (x) << ": " << endl;
#else
#define debug(...) 42
#define numtest(x) 42
#endif
int merge(const vector<int> &a, const vector<int> &b) {
int n = sz(a);
int res = 0;
for (int c = 0, i = 0, j = 0; c < n; ++c) {
if (a[i] > b[j]) {
++res;
++i;
} else if (a[i] < b[j]) {
++j;
} else {
assert(0);
}
}
return res;
}
void solve() {
int n;
cin >> n;
vector<int> a(n), b(n);
for (int i = 0; i < n; ++i) {
cin >> a[i];
}
for (int i = 0; i < n; ++i) {
cin >> b[i];
}
vector<int> pref_min(n), suf_max(n);
pref_min[0] = 0;
for (int i = 1; i < n; ++i) {
pref_min[i] = pref_min[i - 1];
if (a[i] < a[pref_min[i - 1]]) {
pref_min[i] = i;
}
}
suf_max[n - 1] = n - 1;
for (int i = n - 2; i >= 0; --i) {
suf_max[i] = suf_max[i + 1];
if (a[i] > a[suf_max[i + 1]]) {
suf_max[i] = i;
}
}
int cur = merge(a, b);
int l = cur, r = n;
while (r - l > 1) {
int m = l + (r - l) / 2;
swap(a[pref_min[m - 1]], a[suf_max[m]]);
if (merge(a, b) >= m) {
l = m;
} else {
r = m;
}
swap(a[pref_min[m - 1]], a[suf_max[m]]);
}
cout << l << en;
}
int32_t main() {
int tests = 1;
#ifdef LOCAL
freopen("input.txt", "r", stdin);
tests = 1;
#else
ios_base::sync_with_stdio(0);
cin.tie(0);
#endif
cin >> tests;
for (int testcase = 1; testcase <= tests; ++testcase) {
solve();
}
return 0;
}
|
2113
|
E
|
From Kazan with Love
|
Marat is a native of Kazan. Kazan can be represented as an undirected tree consisting of $n$ vertices. In his youth, Marat often got into street fights, and now he has $m$ enemies, numbered from $1$ to $m$, living in Kazan along with him.
Every day, all the people living in the city go to work. Marat knows that the $i$-th of his enemies lives at vertex $a_i$ and works at vertex $b_i$. He himself lives at vertex $x$ and works at vertex $y$. It is guaranteed that $a_i \ne x$.
All enemies go to work via the shortest path and leave their homes at time $1$. That is, if we represent the shortest path between vertices $a_i$ and $b_i$ as $c_1, c_2, c_3, \ldots, c_k$ (where $c_1 = a_i$ and $c_k = b_i$), then at the moment $p$ ($1 \le p \le k$), the enemy numbered $i$ will be at vertex $c_p$.
Marat really does not want to meet any of his enemies at the same vertex at the same time, as this would create an awkward situation, but they \textbf{can meet on an edge}. Marat also leaves his home at time $1$, and at each subsequent moment in time, he can either move to an adjacent vertex or stay at his current one.
Note that Marat can only meet the $i$-th enemy at the moments $2, 3, \ldots, k$ (where $c_1, c_2, \ldots, c_k$ is the shortest path between vertices $a_i$ and $b_i$). In other words, starting from the moment after the enemy reaches work, Marat \textbf{can no longer meet him}.
Help Marat find the earliest moment in time when he can reach work without encountering any enemies along the way, or determine that it is impossible.
|
Lemma: if the answer $\neq -1$, then it does not exceed $2 \times N + 1$. Proof: let the optimal path be $s_1 = x, s_2, s_3, \cdots, s_k = y$. Then, at time $N+1$, all the enemies have already reached work, and we can go from $s_{N+1}$ to $s_{k}$ in $\leq N$ moves, so $k \leq 2 \times N + 1$. Let us call a moment $t$ bad for a vertex $v$ if at this moment there is an enemy at this vertex. Obviously, the total number of bad moments over all vertices is $\leq N \times M$. Then the problem can be solved in $O(N^2)$. Let us define $dp_{v, t}$: whether Marat can be at vertex $v$ at time $t$. Initially, $dp_{x, 1} = 1$. For each $t$, we make transitions by iterating over vertices, and if $dp_{v, t} = 1$, then we can set $dp_{u, t+1} = 1$ for all neighbors $u$ of $v$, including $v$ itself. And if at time $t$ there is an enemy at vertex $v$, then we must set $dp_{v, t} = 0$. The answer is the minimal $t$ such that $dp_{y, t} = 1$. Solution in $O(N \times M)$: let us maintain a set $S_t$ of vertices where Marat can potentially be at time $t$, i.e., for which $dp_{v, t} = 1$. Clearly, $S_{1} = \{x\}$. How do $S_t$ and $S_{t+1}$ differ? If at time $t+1$ there is an enemy at vertex $v$, then $v$ definitely does not belong to $S_{t+1}$; otherwise, if $v \in S_t$, then $v \in S_{t+1}$; also, if any neighbor of vertex $v$ belongs to $S_t$, then $v \in S_{t+1}$. The set $S$ is easy to update when moving from $t$ to $t+1$. We are interested in two types of vertices: (1) those that have an enemy at time $t+1$; (2) those that are not in $S_t$ but have a neighbor in $S_t$. Vertices of type 1 are easy to process. For example, we can ignore their existence, recalculate $S_{t+1}$, and remove all type 1 vertices, having previously recorded them in a separate list. In total, over all $t$, the number of type 1 vertices is at most $N \times M$, since their number does not exceed the sum of the lengths of all enemy paths. Vertices of type 2 are processed as follows: when some vertex enters the set (and was not in it at the previous moment), we mark all its neighbors in the tree as "candidates" for addition. Also, we add to the candidates all vertices that were of type 1 at the previous moment; these are vertices that could belong to the set now, but were removed from it at the previous moment. Then we try to add all candidates to $S$, and clear the list. In total, there are $O(N \times M)$ candidates, since each vertex can "enter" the set no more than $M+1$ times (to "enter", it must have "left" before, and this could have happened no more than $M$ times), so its list of neighbors is added to the candidates no more than $M+1$ times. The sum of the lengths of the neighbor lists equals the sum of the degrees of the vertices, which is $2 \times (N - 1) = O(N)$. Total: $O(N \times M)$.
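The straightforward layered-reachability solution described above (before the $O(N \times M)$ optimization) is short enough to write directly and makes a good reference implementation. A hedged sketch (adjacency given as a dict of lists; helper names are ours):
# Hedged sketch: quadratic reference - propagate the set of reachable vertices moment by moment.
from collections import defaultdict
def earliest_arrival(n, adj, x, y, enemies):
    def tree_path(a, b):
        parent = {a: a}
        stack = [a]
        while stack:
            v = stack.pop()
            if v == b:
                break
            for u in adj[v]:
                if u not in parent:
                    parent[u] = v
                    stack.append(u)
        path = [b]
        while path[-1] != a:
            path.append(parent[path[-1]])
        return path[::-1]
    blocked = defaultdict(set)                 # blocked[t] = vertices occupied by some enemy at time t
    for a, b in enemies:
        for t, v in enumerate(tree_path(a, b), start=1):
            blocked[t].add(v)
    reachable = {x} - blocked[1]
    if y in reachable:
        return 1
    for t in range(2, 2 * n + 2):              # by the lemma, the answer does not exceed 2n + 1
        nxt = set(reachable)
        for v in reachable:
            nxt.update(adj[v])                 # stay or move to a neighbor
        reachable = nxt - blocked[t]
        if y in reachable:
            return t
    return -1
adj = {1: [2], 2: [1, 3], 3: [2]}
print(earliest_arrival(3, adj, 1, 3, [(2, 3)]))  # 3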
|
[
"dfs and similar",
"graphs",
"implementation",
"trees"
] | 2,800
|
//#pragma GCC optimize("Ofast")
#include "bits/stdc++.h"
#define rep(i, n) for (int i = 0; i < (n); ++i)
#define rep1(i, n) for (int i = 1; i < (n); ++i)
#define rep1n(i, n) for (int i = 1; i <= (n); ++i)
#define repr(i, n) for (int i = (n) - 1; i >= 0; --i)
//#define pb push_back
#define eb emplace_back
#define all(a) (a).begin(), (a).end()
#define rall(a) (a).rbegin(), (a).rend()
#define each(x, a) for (auto &x : a)
#define ar array
#define vec vector
#define range(i, n) rep(i, n)
using namespace std;
using ll = long long;
using ull = unsigned long long;
using ld = double;
using str = string;
using pi = pair<int, int>;
using pl = pair<ll, ll>;
using vi = vector<int>;
using vl = vector<ll>;
using vpi = vector<pair<int, int>>;
using vvi = vector<vi>;
int Bit(int mask, int b) { return (mask >> b) & 1; }
template<class T>
bool ckmin(T &a, const T &b) {
if (b < a) {
a = b;
return true;
}
return false;
}
template<class T>
bool ckmax(T &a, const T &b) {
if (b > a) {
a = b;
return true;
}
return false;
}
// [l, r)
template<typename T, typename F>
T FindFirstTrue(T l, T r, const F &predicat) {
--l;
while (r - l > 1) {
T mid = l + (r - l) / 2;
if (predicat(mid)) {
r = mid;
} else {
l = mid;
}
}
return r;
}
template<typename T, typename F>
T FindLastFalse(T l, T r, const F &predicat) {
return FindFirstTrue(l, r, predicat) - 1;
}
const int INFi = 2e9;
const ll INF = 2e18;
void solve() {
int n, m, x, y; cin >> n >> m >> x >> y;
x--;
y--;
vvi g(n);
rep(_, n - 1) {
int u, v; cin >> u >> v;
u--;
v--;
g[u].push_back(v);
g[v].push_back(u);
}
vector<vi> block;
vi path;
auto dfs = [&] (auto &&self, int v, int p, int t) -> bool {
path.push_back(v);
if (v == t) return true;
for(auto &u : g[v]) {
if (u == p) continue;
if (self(self, u, v, t)) return true;
}
path.pop_back();
return false;
};
rep(i, m) {
int a, b; cin >> a >> b;
a--;
b--;
dfs(dfs, a, -1, b);
assert(!path.empty());
if (block.size() < path.size()) block.resize(path.size());
rep(j, path.size()) block[j].push_back(path[j]);
path.clear();
}
vector<bool> ok(n, false);
vi q;
q.push_back(x);
vector<bool> cur(n, false);
vi was(n, -1);
for(int t = 0;;++t) {
if (t < block.size()) for(auto &u : block[t]) cur[u] = true;
vi nxt;
for(auto &v : q) {
if (ok[v] || cur[v] || was[v] == t) continue;
was[v] = t;
bool nei = 0;
for(auto &u : g[v]) nei |= ok[u];
if (t && !nei) continue;
nxt.push_back(v);
}
q.clear();
if (t < block.size()) {
for(auto &u : block[t]) {
cur[u] = false;
if (ok[u]) {
ok[u] = false;
}
q.push_back(u);
}
}
for(auto &v : nxt) {
assert(!ok[v]);
ok[v] = true;
for(auto &u : g[v]) {
if (!ok[u]) {
q.push_back(u);
}
}
}
if (ok[y]) {
cout << t + 1 << '\n';
return;
}
if (t > (int)block.size() && q.empty()) {
cout << "-1\n";
return;
}
}
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(0);
cout << setprecision(12) << fixed;
int t = 1;
cin >> t;
rep(_, t) {
solve();
}
return 0;
}
|
2113
|
F
|
Two Arrays
|
You are given two arrays $a$ and $b$ of length $n$. You can perform the following operation an unlimited number of times:
- Choose an integer $i$ from $1$ to $n$ and swap $a_i$ and $b_i$.
Let $f(c)$ be the number of distinct numbers in array $c$. Find the maximum value of $f(a) + f(b)$. Also, output the arrays $a$ and $b$ after performing all operations.
|
This problem has many different solutions. Feel free to share your ideas in the comments. Let us consider an arbitrary number $x$. If it does not appear in arrays $a$ and $b$, then it does not affect the answer in any way. If it appears exactly once, then no matter how we perform the operations, it always contributes $1$ to the answer. If it appears at least twice, then it contributes either $1$ or $2$ to the answer: if all occurrences are in one of the two arrays, the contribution is $1$, otherwise it is $2$. Thus, if $\mathrm{cnt}_x$ is the number of occurrences of $x$ in both arrays, then its contribution does not exceed $\min(\mathrm{cnt}_x, 2)$. Therefore, the answer does not exceed $\sum \min(\mathrm{cnt}_x, 2)$. Solution 1. Let us build a graph on the values in the arrays. For each $i$, add an undirected edge between vertices $a_i$ and $b_i$. Now, we need to orient the edges of the graph in such a way that from each vertex of degree at least $2$ there is at least one outgoing edge, and into each vertex of degree at least $2$ there is at least one incoming edge. If the $i$-th edge is oriented from vertex $x$ to vertex $y$, then $(a_i, b_i) = (x, y)$. In each connected component, build a DFS tree rooted at an arbitrary vertex. Orient all edges of this tree from top to bottom, and orient the back edges from bottom to top. It is easy to see that in this construction, the orientation condition is satisfied for all vertices except possibly the root. For the root, the condition is not satisfied only if its degree is at least $2$ and there is no back edge entering it. In this case, the root has at least two subtrees, so it is enough to choose any of them and reverse all the edges in it. This gives a construction that achieves the bound, and can be built in $\mathcal{O}(n)$. Solution 2. Consider all occurrences of the number $x$ in both arrays. Split them into pairs; if there is an odd number of occurrences, leave one occurrence unpaired. For each pair, replace both of its occurrences of $x$ with the same new number, different from all existing ones (a distinct new number for each pair). Thus, we obtain two arrays where each number appears at most twice. Now, build a graph similar to Solution 1. Since the degree of each vertex does not exceed $2$, it breaks up into cycles and paths. Each path can be oriented from start to end. The edges in a cycle can be oriented in one direction so that the cycle becomes directed. This gives a construction in $\mathcal{O}(n)$.
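The upper bound $\sum \min(\mathrm{cnt}_x, 2)$ is easy to confirm experimentally on small arrays by trying all $2^n$ subsets of positions to swap. A hedged sketch (names are ours, exhaustive search only for tiny $n$):
# Hedged sketch: exhaustive check of the bound sum(min(cnt_x, 2)) on a small input.
from collections import Counter
from itertools import product
def brute_best(a, b):
    n = len(a)
    best = 0
    for mask in product([0, 1], repeat=n):        # mask[i] = 1 means swap a[i] and b[i]
        aa = [b[i] if mask[i] else a[i] for i in range(n)]
        bb = [a[i] if mask[i] else b[i] for i in range(n)]
        best = max(best, len(set(aa)) + len(set(bb)))
    return best
a, b = [1, 2, 2, 3], [2, 1, 3, 3]
cnt = Counter(a) + Counter(b)
print(brute_best(a, b), sum(min(c, 2) for c in cnt.values()))   # 6 6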
|
[
"constructive algorithms",
"dfs and similar",
"graphs",
"math"
] | 2,500
|
#include "bits/stdc++.h"
using namespace std;
void solve() {
int n;
cin >> n;
vector<int> a(n);
for (int i = 0; i < n; i++) {
cin >> a[i];
}
vector<int> b(n);
for (int i = 0; i < n; i++) {
cin >> b[i];
}
vector<int> want(n);
auto make = [&] (int i, int x, int y) {
if (!want[i]) {
if (a[i] != x) {
swap(a[i], b[i]);
}
want[i] = 1;
}
};
const int N = 2 * n + 22;
vector<vector<pair<int, int>>> g(N);
for (int i = 0; i < n; i++) {
g[a[i]].push_back({b[i], i});
g[b[i]].push_back({a[i], i});
}
vector<int> used(N);
auto dfs = [&] (auto&& dfs, int v, int h) -> void {
used[v] = h;
for (auto& [u, i] : g[v]) {
if (used[u] <= 0) {
make(i, v, u);
dfs(dfs, u, h + 1);
} else if (used[u] < used[v]) {
make(i, v, u);
}
}
};
for (int i = 0; i < N; i++) {
if (used[i] == 0 && int(g[i].size()) == 1) {
dfs(dfs, i, 1);
}
}
int rt;
auto find = [&] (auto&& find, int v, int pr) -> void {
used[v] = -1;
for (auto& [u, i] : g[v]) {
if (used[u] == 0) {
find(find, u, i);
} else if (i != pr) {
rt = u;
}
}
};
for (int i = 0; i < N; i++) {
if (used[i] == 0 && !g[i].empty()) {
rt = -1;
find(find, i, -1);
dfs(dfs, rt, 1);
}
}
cout << set<int>(a.begin(), a.end()).size() + set<int>(b.begin(), b.end()).size() << '\n';
for (int i = 0; i < n; i++) {
cout << a[i] << " ";
}
cout << '\n';
for (int i = 0; i < n; i++) {
cout << b[i] << " ";
}
cout << '\n';
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t;
cin >> t;
while (t--) {
solve();
}
}
|
2114
|
A
|
Square Year
|
One can notice the following remarkable mathematical fact: the number $2025$ can be represented as $(20+25)^2$.
You are given a year represented by a string $s$, consisting of exactly $4$ characters. Thus, leading zeros are allowed in the year representation. For example, "0001", "0185", "1375" are valid year representations. You need to express it in the form $(a + b)^2$, where $a$ and $b$ are \textbf{non-negative integers}, or determine that it is impossible.
For example, if $s$ = "0001", you can choose $a = 0$, $b = 1$, and write the year as $(0 + 1)^2 = 1$.
|
To solve this problem, it is enough to check whether the number $s$ is the square of some integer $x$. If it is, then the answer for this test case can be the pair $a = 0$, $b = x$, or any other partition of $x$ into a pair of non-negative summands. Otherwise, the answer is $-1$.
|
[
"binary search",
"brute force",
"math"
] | 800
|
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n;
cin >> n;
int sq = ceil(sqrt(n));
if (sq * sq == n) {
cout << 0 << ' ' << sq << "\n";
} else {
cout << "-1\n";
}
}
int main() {
int t;
cin >> t;
while (t--) solve();
}
|
2114
|
B
|
Not Quite a Palindromic String
|
Vlad found a binary string$^{\text{∗}}$ $s$ of even length $n$. He considers a pair of indices ($i, n - i + 1$), where $1 \le i < n - i + 1$, to be good if $s_i = s_{n - i + 1}$ holds true.
For example, in the string '010001' there is only $1$ good pair, since $s_1 \ne s_6$, $s_2 \ne s_5$, and $s_3=s_4$. In the string '0101' there are no good pairs.
Vlad loves palindromes, but not too much, so he wants to rearrange some characters of the string so that there are exactly $k$ good pairs of indices.
Determine whether it is possible to rearrange the characters in the given string so that there are exactly $k$ good pairs of indices ($i, n - i + 1$).
\begin{footnotesize}
$^{\text{∗}}$A string $s$ is called binary if it consists only of the characters '0' and '1'
\end{footnotesize}
|
To begin, let's solve a simpler problem: we will find the minimum and maximum possible number of good pairs. To achieve the minimum number of pairs, we will place all zeros at the beginning of the string and all ones at the end. Then, if the number of zeros is $c_0$ and the number of ones is $c_1$, the number of good pairs will be $\max(c_0, c_1) - \frac{n}{2}$ (they will be in the middle of the string). The maximum number of good pairs is equal to $\lfloor\frac{c_0}{2}\rfloor + \lfloor\frac{c_1}{2}\rfloor$. For $k$ to be achievable, it must obviously be no less than the minimum and no more than the maximum possible number of pairs; this condition is necessary but not sufficient. For example, in a string of $2$ zeros and $2$ ones, you can obtain $0$ or $2$ good pairs, but you cannot obtain one. This happens because any swap of symbols changes the number of good pairs either by $0$ or by $2$. Let's demonstrate the results of swaps between the first elements of the pairs: 00 and 10 $\rightarrow$ 10 and 00: the number of good pairs did not change; 00 and 11 $\rightarrow$ 10 and 01: the number of good pairs changed by $2$; 01 and 10 $\rightarrow$ 11 and 00: the number of good pairs changed by $2$; 01 and 11 $\rightarrow$ 11 and 01: the number of good pairs did not change; other pairs are either symmetrical to those shown or do not change the string. Therefore, $k$ is achievable if and only if it lies between the minimum and the maximum possible number of good pairs and differs from the minimum by an even number.
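The parity observation can be illustrated by brute force on a small string: enumerate all distinct rearrangements and collect the achievable numbers of good pairs. A hedged sketch (function name is ours):
# Hedged sketch: enumerate all rearrangements of a small string and list achievable k.
from itertools import permutations
def achievable_pairs(s):
    res = set()
    for t in set(permutations(s)):
        n = len(t)
        res.add(sum(t[i] == t[n - 1 - i] for i in range(n // 2)))
    return sorted(res)
print(achievable_pairs("0011"))    # [0, 2] - exactly one good pair is impossible
print(achievable_pairs("000011"))  # [1, 3] - minimum 1, maximum 3, same parity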
|
[
"greedy",
"math"
] | 900
|
#include <bits/stdc++.h>
#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(time(nullptr));
const int inf = 1e9;
const int M = 1e9 + 7;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;
void solve(int tc){
int n, k;
cin >> n >> k;
string s;
cin >> s;
vector<int> cnt(2);
for(char c: s){
cnt[c - '0']++;
}
int mn = max(cnt[0], cnt[1]) - n / 2;
int mx = cnt[0] / 2 + cnt[1] / 2;
if(k >= mn && (k - mn) % 2 == 0 && k <= mx) cout << "YES";
else cout << "NO";
}
bool multi = true;
signed main() {
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
|
2114
|
C
|
Need More Arrays
|
Given an array $a$ of $n$ integers. It is sorted in non-decreasing order, that is, $a_i \le a_{i + 1}$ for all $1 \le i < n$.
You can remove any number of elements from the array (including the option of not removing any at all) without changing the order of the remaining elements. After the removals, the following will occur:
- $a_1$ is written to a new array;
- if $a_1 + 1 < a_2$, then $a_2$ is written to a new array; otherwise, $a_2$ is written to the same array as $a_1$;
- if $a_2 + 1 < a_3$, then $a_3$ is written to a new array; otherwise, $a_3$ is written to the same array as $a_2$;
- $\cdots$
For example, if $a=[1, 2, 4, 6]$, then:
- $a_1 = 1$ is written to the new array, resulting in arrays: $[1]$;
- $a_1 + 1 = 2$, so $a_2 = 2$ is added to the existing array, resulting in arrays: $[1, 2]$;
- $a_2 + 1 = 3$, so $a_3 = 4$ is written to a new array, resulting in arrays: $[1, 2]$ and $[4]$;
- $a_3 + 1 = 5$, so $a_4 = 6$ is written to a new array, resulting in arrays: $[1, 2]$, $[4]$, and $[6]$.
Your task is to remove elements in such a way that the described algorithm creates as many arrays as possible. If you remove all elements from the array, no new arrays will be created.
|
We will select elements from left to right, skipping some of them. If a new element would go into the same array as the last element we took, then skipping it cannot make things worse: if we took an element equal to $x$, then skipping an element equal to $x + 1$ allows a later element equal to $x + 2$ to start a new array instead of continuing the current one. Similarly, it is always beneficial to take the first suitable element as the start of each new array, since the next element may be larger and taking it instead can only worsen the answer. So the greedy is: take an element whenever it exceeds the last taken element by more than $1$, and the answer is the number of taken elements.
|
[
"dp",
"greedy"
] | 1,000
|
#include <bits/stdc++.h>
using namespace std;
void solve(int tc){
int n;
cin >> n;
int last = -1, ans = 0;
for(int i = 0; i < n; ++i){
int a;
cin >> a;
if(a - last > 1){
ans++;
last = a;
}
}
cout << ans;
}
bool multi = true;
signed main() {
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
|
2114
|
D
|
Come a Little Closer
|
The game field is a matrix of size $10^9 \times 10^9$, with a cell at the intersection of the $a$-th row and the $b$-th column denoted as ($a, b$).
There are $n$ monsters on the game field, with the $i$-th monster located in the cell ($x_i, y_i$), while the other cells are empty. No more than one monster can occupy a single cell.
You can move one monster to any cell on the field that is not occupied by another monster \textbf{at most once}.
After that, you must select \textbf{one} rectangle on the field; all monsters within the selected rectangle will be destroyed. You must pay $1$ coin for each cell in the selected rectangle.
Your task is to find the minimum number of coins required to destroy all the monsters.
|
The minimum rectangle that we can choose must have sides of length $\max_i(x_i) - \min_i(x_i) + 1$ and $\max_i(y_i) - \min_i(y_i) + 1$, meaning that the maximum and minimum values along both axes are important. Let's consider the movement of a certain monster; we will not choose a new position but will simply examine the rectangle needed to cover the remaining ones. To find the new maximums and minimums, we can use a multiset or a similar ordered data structure. The code below will utilize a simple fact: if we remove a point with the maximum coordinate along some axis, the new maximum will become the second maximum (similarly for minimums). Thus, for each axis, we can store two minimums and two maximums. Now that we know how to find the minimum rectangle for all monsters except the current one, we just need to determine if it can fit inside this rectangle. If the area of the rectangle is equal to $n - 1$, then all the spaces inside it are already occupied, and one of the sides must be increased; otherwise, the monster can be placed inside it.
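A quadratic brute force that recomputes the bounding box from scratch for every candidate monster to move is a convenient reference for the two-minimums/two-maximums trick. A hedged sketch (names are ours):
# Hedged sketch: O(n^2) reference - recompute the bounding box after removing each monster.
def min_cost(points):
    n = len(points)
    if n <= 2:
        return n                               # move one monster next to the other
    def box_area(pts, extra_inside):
        h = max(p[0] for p in pts) - min(p[0] for p in pts) + 1
        w = max(p[1] for p in pts) - min(p[1] for p in pts) + 1
        if extra_inside and h * w == len(pts):  # no free cell inside: one side must grow
            return min((h + 1) * w, h * (w + 1))
        return h * w
    best = box_area(points, False)              # option: do not move anyone
    for i in range(n):
        rest = points[:i] + points[i + 1:]
        best = min(best, box_area(rest, True))
    return best
print(min_cost([(1, 1), (1, 2), (5, 5)]))       # 3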
|
[
"brute force",
"greedy",
"implementation",
"math"
] | 1,400
|
#include <bits/stdc++.h>
#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(time(nullptr));
const int inf = 1e9;
const int M = 1e9 + 7;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;
struct min_max{
int mx1, mx2, mn1, mn2;
void fix_mx(){
if(mx1 < mx2){
swap(mx1, mx2);
}
}
void fix_mn(){
if(mn1 > mn2){
swap(mn1, mn2);
}
}
min_max(int a, int b){
mx1 = mn1 = a;
mx2 = mn2 = b;
fix_mx();
fix_mn();
}
void add(int x){
mx2 = max(mx2, x);
mn2 = min(mn2, x);
fix_mx();
fix_mn();
}
int get_seg(int x){
pair<int, int> res = {mn1, mx1};
if(x == mn1) res.x = mn2;
if(x == mx1) res.y = mx2;
return res.y - res.x + 1;
}
};
void solve(int tc){
int n;
cin >> n;
vector<pair<int, int>> coord(n);
for(auto &e: coord){
cin >> e.x >> e.y;
}
if(n <= 2){
cout << n;
return;
}
min_max xc(coord[0].x, coord[1].x), yc(coord[0].y, coord[1].y);
for(int i = 2; i < n; ++i){
xc.add(coord[i].x);
yc.add(coord[i].y);
}
int ans = xc.get_seg(-1) * yc.get_seg(-1);
for(int i = 0; i < n; ++i){
int x = xc.get_seg(coord[i].x);
int y = yc.get_seg(coord[i].y);
if(x * y == n - 1){
ans = min(ans, min((x + 1) * y, x * (y + 1)));
}
else{
ans = min(ans, x * y);
}
}
cout << ans;
}
bool multi = true;
signed main() {
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
|