For each vocabulary word \(V_i\), we can consider changing each letter to every other letter. This gives us \(3*L\) possibilities for words differing by exactly one letter. We can store a hash of each of these "\(1\)-mistaken" strings in a map.
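As a small illustration, here is one way to enumerate the \(3*L\) variants of a single word. This is only a sketch: the 4-letter `ALPHABET` below is a placeholder assumption (any alphabet where each position has exactly 3 alternative letters gives the \(3*L\) count).

```cpp
#include <string>
#include <vector>

// Placeholder 4-letter alphabet; with 4 letters each position has 3 alternatives,
// giving 3*L one-mistaken variants per word of length L.
const std::string ALPHABET = "ABCD";

// Returns all strings that differ from `word` in exactly one position.
std::vector<std::string> oneMistakenVariants(const std::string& word) {
    std::vector<std::string> variants;
    for (size_t i = 0; i < word.size(); i++) {
        for (char c : ALPHABET) {
            if (c == word[i]) continue;  // must actually change the letter
            std::string v = word;
            v[i] = c;
            variants.push_back(v);       // 3 variants per index, 3*L in total
        }
    }
    return variants;
}
```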
To avoid an \(\mathcal{O}(L)\) time factor for hashing each possibility, we can use a rolling hash. For example, to hash an array \(A_{1..L}\), we can define \(h(A_1, ..., A_L) := (A_1 * p^1 + ... + A_L * p^L) \text{ mod } P\), where \(p\) and \(P\) are prime numbers. We can then precompute modded prefix sums in \(\mathcal{O}(L)\). Given any \(h(A_1, ..., A_L)\), we will be able to replace a value \(A_i\) with \(A_i'\) without rehashing, by computing \((h(A_1, ..., A_L) - h(A_1, ..., A_{i}) + h(A_1, ..., A_{i - 1}) + A_i' * p^i) \text{ mod } P\) in \(\mathcal{O}(1)\) time (where the second and third terms are precomputed prefix sums).
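A minimal sketch of this rolling hash, assuming 1-indexed positions as in the formula above; the specific base \(p\) and modulus \(P\) are placeholder choices.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Placeholder primes; any reasonable base/modulus pair works similarly.
const int64_t p = 131;
const int64_t P = 1'000'000'007;

struct RollingHash {
    std::vector<int64_t> pw;      // pw[i] = p^i mod P
    std::vector<int64_t> prefix;  // prefix[i] = h(A_1..A_i) = sum_{k<=i} A_k * p^k mod P

    explicit RollingHash(const std::string& A) {
        size_t L = A.size();
        pw.assign(L + 1, 1);
        prefix.assign(L + 1, 0);
        for (size_t i = 1; i <= L; i++) {
            pw[i] = pw[i - 1] * p % P;
            prefix[i] = (prefix[i - 1] + A[i - 1] * pw[i]) % P;
        }
    }

    // Full hash h(A_1..A_L).
    int64_t full() const { return prefix.back(); }

    // Hash of the string with A_i replaced by c (1-indexed i), in O(1):
    // h - prefix[i] + prefix[i-1] + c * p^i, all mod P.
    int64_t replaced(int i, char c) const {
        int64_t h = (full() - prefix[i] + prefix[i - 1] + c * pw[i]) % P;
        return (h % P + P) % P;  // keep the result non-negative
    }
};
```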
For each hash \(h\), we can compute two tables of counts:
* \(\text{H}[h][j]\) storing the number of \(1\)-mistaken strings with hash \(h\), mistaken at index \(j\)
* \(\text{Hsum}[h] = \sum_j \text{H}[h][j]\) storing the total number of \(1\)-mistaken strings with hash \(h\)
For each query \(W_i\), we can first compute \(h(W_i)\) in \(\mathcal{O}(L)\). Then, we can go through all \(3*L\) possibilities of strings derived from changing the letter of \(W_i\) at each index \(j_1\) (deriving a new hash \(h'\) from \(h(W_i)\) in \(\mathcal{O}(1)\) time as described above). We can check whether this changed string matches any previously stored \(1\)-mistaken strings (mistaken at some other index \(j_2\)). Each such replacement will contribute \((\text{Hsum}[h'] - \text{H}[h'][j_1])/2\) to the answer. That is, we need to exclude matches where \(j_1\) equals the previously-mistaken index, and divide by \(2\) to avoid double counting \((j_1, j_2)\) and \((j_2, j_1)\).
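Putting the pieces together, here is a sketch of the counting step under the same assumptions; the `solve` signature and the `vocab`/`queries` containers are hypothetical, and \(\text{H}\)/\(\text{Hsum}\) are kept in hash maps keyed by hash value. It assumes the `RollingHash` struct and `ALPHABET` placeholder from the sketches above.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

std::vector<int64_t> solve(const std::vector<std::string>& vocab,
                           const std::vector<std::string>& queries) {
    // H[h][j]  = number of 1-mistaken vocabulary strings with hash h, mistaken at index j.
    // Hsum[h]  = total number of 1-mistaken vocabulary strings with hash h.
    std::unordered_map<int64_t, std::unordered_map<int, int64_t>> H;
    std::unordered_map<int64_t, int64_t> Hsum;

    for (const std::string& V : vocab) {
        RollingHash rh(V);
        for (int j = 1; j <= (int)V.size(); j++) {
            for (char c : ALPHABET) {
                if (c == V[j - 1]) continue;
                int64_t h = rh.replaced(j, c);  // hash of V with a mistake at index j
                H[h][j]++;
                Hsum[h]++;
            }
        }
    }

    std::vector<int64_t> answers;
    for (const std::string& W : queries) {
        RollingHash rh(W);
        int64_t doubled = 0;  // counts each matching pair twice, via j1 and via j2
        for (int j1 = 1; j1 <= (int)W.size(); j1++) {
            for (char c : ALPHABET) {
                if (c == W[j1 - 1]) continue;
                int64_t h2 = rh.replaced(j1, c);
                auto it = Hsum.find(h2);
                if (it == Hsum.end()) continue;
                int64_t sameIndex = 0;          // matches whose mistake is also at j1
                auto itH = H.find(h2);
                if (itH != H.end()) {
                    auto itJ = itH->second.find(j1);
                    if (itJ != itH->second.end()) sameIndex = itJ->second;
                }
                doubled += it->second - sameIndex;  // Hsum[h'] - H[h'][j1]
            }
        }
        answers.push_back(doubled / 2);  // each pair was counted as (j1, j2) and (j2, j1)
    }
    return answers;
}
```

Note that the sketch accumulates the doubled contributions and halves once at the end, which avoids any per-term integer-division issues.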
The overall time and space complexity is \(\mathcal{O}(3*L*(N + Q))\), where \(N\) is the number of vocabulary words and \(Q\) is the number of queries.