At a glance, the problem seems to ask for the (K) longest paths, which is NP-hard on general graphs. However, it can be observed that the input must form a DAG, for which polynomial-time solutions exist (e.g. by negating the edge weights and solving for the shortest path). Unfortunately, such solutions have running time proportional to the number of edges, of which there can be (\mathcal{O}(N^2)) here (for example, if (N/2) clients can all sell to the other (N/2)).
Instead, we'll try a completely different approach. First consider the problem of finding just the top (K = 1) path profit. Let (P_i) be the maximum profit that can arise from a path starting at client (i). Note that (P_i = \max(0, \max_j(P_j + X_j - Y_i))) over all clients (j) such that (B_i = A_j) and (X_j > Y_i), where the (0) accounts for the empty path (making no sale at all). The critical observation is that each (P_i) only depends on days (B_i) and later, and only depends on buyers with prices greater than (Y_i).
We'll iterate over the days in reverse. For each day, we'll iterate over all of its buyers/sellers in non-increasing order of price, while maintaining the largest value (v = P_j + X_j) across that day's buyers seen so far (resetting (v) at the start of each day, since a sale requires (B_i = A_j) exactly). For each buyer (j), we'll update (v) to (\max(v, P_j + X_j)). For each seller (i), we'll set (P_i = \max(0, v - Y_i)). Once all the days and buy/sell events per day have been sorted, this processing takes (\mathcal{O}(N)) time. Finally, the top (1) path profit is the (\max(P_i)) across all possible starting clients (i).
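A minimal sketch of this (K = 1) sweep is below. The input format is an assumption for illustration: one tuple ((A_i, X_i, B_i, Y_i)) per client, meaning client (i) can buy on day (A_i) at price (X_i) and sell on day (B_i) at price (Y_i) (with (A_i < B_i)), so a sale (i \to j) requires (B_i = A_j) and (X_j > Y_i) and contributes (X_j - Y_i) profit.

```python
def top1_profit(clients):
    """clients[i] = (A, X, B, Y): client i can buy on day A at price X and
    sell on day B at price Y (hypothetical input format, with A < B)."""
    n = len(clients)
    # One buy and one sell event per client: (day, price, kind, client).
    # kind 0 = sell, 1 = buy, so that on price ties the sell is processed
    # first and the constraint X_j > Y_i stays strict.
    events = []
    for i, (A, X, B, Y) in enumerate(clients):
        events.append((B, Y, 0, i))
        events.append((A, X, 1, i))
    events.sort(key=lambda e: (-e[0], -e[1], e[2]))  # descending day, then price

    P = [0] * n                  # P[i] = best profit of a path starting at i
    v = float('-inf')            # best P_j + X_j among this day's buyers so far
    cur_day = None
    for day, price, kind, i in events:
        if day != cur_day:       # sales require B_i = A_j exactly, so the
            cur_day, v = day, float('-inf')  # running maximum resets per day
        if kind == 1:            # buy event of client j
            v = max(v, P[i] + price)
        else:                    # sell event of client i
            P[i] = max(0, v - price)
    return max(P, default=0)
```

For instance, three clients chained as (0 \to 1 \to 2) with per-sale profits (3) and (3) yield a best path profit of (6).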
Now generalizing to the top (K > 1) paths, let each (P_i = [P_{i,1}, P_{i,2}, \ldots]) be a list of the top (K) largest profits from distinct paths starting at client (i). Then:
[ P_i = \text{top-}K\left( \bigcup_{j \text{ s.t. } B_i = A_j,\, X_j > Y_i} \left[ P_{j,1} + X_j - Y_i,\ P_{j,2} + X_j - Y_i,\ \ldots \right] \right) ]
where (\bigcup) is the union of lists (multisets), with duplicates allowed. Another way to interpret this: if there are several top paths starting at buyer (j), and a sale (i \to j) is possible, then we can prepend (i) to each such path to get a new distinct path starting with that sale.
Again we process all events sorted by descending day and then by descending price. For each day, we start a new running list (V) of the top (K) values of (P_{j,k} + X_j) across the buyers (j) (and their path lists) seen so far that day. For each seller (i), we merge (P_i) with the top (K) values of ([v - Y_i \text{ s.t. } v \in V]). Finally, the answer is the top (K) profits across the union of all (P_i) lists.
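The same sweep generalized to lists can be sketched as follows, under the same assumed input format as before (one ((A_i, X_i, B_i, Y_i)) tuple per client). Seeding each list with a single (0) (the empty path) mirrors the (\max(0, \cdot)) in the (K = 1) case; this seeding is an interpretive choice, not spelled out in the text.

```python
def topk_profits(clients, K):
    """Top-K path profits; clients[i] = (A, X, B, Y) as assumed before:
    client i buys on day A at price X and sells on day B at price Y."""
    def merge(a, b):
        # Two-pointer merge of two descending-sorted lists, keeping only
        # the K largest values: O(K) time and space per merge.
        out, i, j = [], 0, 0
        while len(out) < K and (i < len(a) or j < len(b)):
            if j == len(b) or (i < len(a) and a[i] >= b[j]):
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out

    n = len(clients)
    events = []
    for i, (A, X, B, Y) in enumerate(clients):
        events.append((B, Y, 0, i))   # sell event (kind 0 wins price ties)
        events.append((A, X, 1, i))   # buy event
    events.sort(key=lambda e: (-e[0], -e[1], e[2]))

    P = [[0] for _ in range(n)]       # seed each list with the empty path
    V, cur_day = [], None             # day's top-K values of P_{j,k} + X_j
    for day, price, kind, i in events:
        if day != cur_day:
            cur_day, V = day, []      # matches require B_i = A_j exactly
        if kind == 1:                 # buyer j contributes its whole list
            V = merge(V, [p + price for p in P[i]])
        else:                         # seller i extends the day's best paths
            P[i] = merge(P[i], [v - price for v in V])
    # The answer: top K over the union of all clients' lists.
    best = []
    for lst in P:
        best = merge(best, lst)
    return best
```

On the three-client chain used earlier, the distinct non-empty paths have profits (6), (3), and (3), so the top (2) answer is ([6, 3]).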
At each buy/sell event, we'll need to merge two lists of length up to (K). As long as we make sure that the lists are always kept sorted, merging can be done in (\mathcal{O}(K)) time and space. As there are (2N) events which first need to be sorted, the overall running time is (\mathcal{O}(N \log(N) + NK)). Merging can also be handled with data structures like heaps and binary search trees, leading to a higher but still acceptable overall running time of (\mathcal{O}(N \log(N) + NK \log(K))).
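For reference, a top-(K) merge of two descending-sorted lists can also be written with Python's standard-library streaming merge, which materializes only the first (K) elements:

```python
import heapq
from itertools import islice

def top_k_merge(a, b, k):
    # heapq.merge lazily yields the union of two sorted streams; with
    # reverse=True both inputs and the output are in descending order.
    # Slicing off the first k elements does O(k) work per merge.
    return list(islice(heapq.merge(a, b, reverse=True), k))
```

This does the same job as a hand-rolled two-pointer merge, trading a small constant factor for brevity.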