# Dynamic programming - divide and conquer optimisation

During the recent HackerRank competition *101 Hack 53* I stumbled upon a very nice problem which involves
solving a DP recurrence, at which point I got stuck and couldn't do it fast enough. It turns out that the
recurrence is solvable using the *divide and conquer optimisation* technique, which I now describe in detail. (See the TLDR at the end.)

For illustrative purposes, I will elaborate throughout on the competition **problem** (optimal one-dimensional clustering):

Consider a given set of points \(x_1 \leq x_2 \leq \dots \leq x_n\). We would like to find a partition of \(\{x_1, \dots, x_n\}\) into at most \(m\) clusters/intervals \(C_1, \dots, C_m\), such that the sum of the within-cluster distances

\[ \sum_{i=1}^{n} (x_i - \mu_{c(i)})^2 \]

is minimised, where \(\mu_{c(i)}\) is the mean of the cluster to which \(x_i\) belongs.

The divide and conquer optimisation is applicable to dynamic programming problems of the form:

\[ dp(i, j) = \min_{0 \leq k < j} \left[ dp(i-1, k) + cost(k+1, j) \right] \tag{1} \]

where the cost function satisfies a **certain property** which we will describe later (or equally well for the case of maximisation). Let's straight away
verify that our problem is indeed of this form. First of all, consider the points to be in sorted order, \(x_1 \leq x_2 \leq \dots \leq x_n\). We observe that there must be an optimal clustering which consists of consecutive intervals of these points (otherwise, we could swap two points between two clusters and improve the error term). Let \(dp(i, j)\) denote the minimal possible total cost when partitioning \(x_1, \dots, x_j\) into exactly \(i\) clusters. We can then recurse on the size of the (last) cluster, to which the point \(x_j\) belongs. This gives

\[ dp(i, j) = \min_{0 \leq k < j} \left[ dp(i-1, k) + cost(k+1, j) \right] \]

where \(cost(k+1, j)\) is the within-cluster distance of the cluster \(\{x_{k+1}, \dots, x_j\}\).

Before we dive into optimising the computation of \(dp\), let's quickly give a fast way of computing the cost term \(cost(k+1, l)\). Writing \(\mu = \frac{1}{l-k} \sum_{t=k+1}^{l} x_t\) for the cluster mean, we can rewrite:

\[ cost(k+1, l) = \sum_{t=k+1}^{l} (x_t - \mu)^2 = \sum_{t=k+1}^{l} x_t^2 - \frac{1}{l-k} \left( \sum_{t=k+1}^{l} x_t \right)^2 \]

After precomputing the consecutive (prefix) sums of the \(x_t\)s and of their squares (which takes linear time), we can obtain any \(cost(k+1, l)\) in constant time.
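In code, the precomputation and the constant-time cost query might look as follows (a sketch; `xs` is the sorted list of points and the indices are 1-based, matching the formulas above):

```python
def build_prefix_sums(xs):
    """Prefix sums S[j] = x_1 + ... + x_j and Q[j] = x_1^2 + ... + x_j^2."""
    n = len(xs)
    S = [0.0] * (n + 1)
    Q = [0.0] * (n + 1)
    for j, x in enumerate(xs, start=1):
        S[j] = S[j - 1] + x
        Q[j] = Q[j - 1] + x * x
    return S, Q

def cost(S, Q, k, l):
    """Within-cluster squared distance of the cluster {x_{k+1}, ..., x_l}, in O(1)."""
    s = S[l] - S[k]
    return (Q[l] - Q[k]) - s * s / (l - k)
```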

It is not hard to solve the DP problem (1) in \(O(I \cdot J^2)\) time (where \(I\) is the range of \(i\) and \(J\) the range of \(j\); the inner index \(k\) also ranges over \(J\)); it suffices to simply loop over \(i\), \(j\), \(k\). This gives us a first, naïve solution to our clustering problem.
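A sketch of this naive solution (the constant-time cost trick from above is inlined; `INF` marks unreachable states):

```python
def cluster_naive(xs, m):
    """Minimal total within-cluster cost of sorted xs, using at most m clusters.

    Runs in O(m * n^2): plain loops over i, j and the cut point k.
    """
    n = len(xs)
    S = [0.0] * (n + 1)
    Q = [0.0] * (n + 1)
    for j, x in enumerate(xs, start=1):
        S[j] = S[j - 1] + x
        Q[j] = Q[j - 1] + x * x

    def cost(k, j):  # cost of the cluster {x_{k+1}, ..., x_j}
        s = S[j] - S[k]
        return (Q[j] - Q[k]) - s * s / (j - k)

    INF = float('inf')
    dp = [0.0] + [INF] * n             # row i = 0: only dp(0, 0) is feasible
    best = INF
    for i in range(1, min(m, n) + 1):  # exactly i clusters
        ndp = [INF] * (n + 1)
        for j in range(i, n + 1):
            ndp[j] = min(dp[k] + cost(k, j) for k in range(i - 1, j))
        best = min(best, ndp[n])       # "at most m" = best over all i <= m
        dp = ndp
    return best
```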

Let's see what the **property** for divide and conquer applicability is. Define \(\alpha(i, j)\) to be the smallest index \(k\) in the DP recursion (1) which minimises \(dp(i-1, k) + cost(k+1, j)\). We can use the divide and conquer optimisation if, for a fixed \(i\), the sequence \(\alpha(i, 0), \alpha(i, 1), \dots, \alpha(i, n)\) is *monotonic* (that is, non-increasing or non-decreasing). The idea behind the optimisation is that this will narrow the range of indices which we need to check for computing each \(dp(i, j)\).

We now prove that our clustering problem satisfies this property; in particular, that \[ \alpha(i, 0) \leq \alpha(i, 1) \leq \dots \leq \alpha(i, n) \]

*Proof*. As we will see, a *sufficient condition* for the above monotonicity is that the cost function satisfies the so-called **Quadrangle inequality**: for all \(a \leq b \leq c \leq d\) the following holds:

\[cost(a, c) + cost(b, d) \leq cost(a, d) + cost(b, c) \]

In our case, this is equivalent to (assuming that \(a < b \leq c < d\); in the case of some of these inequalities being equalities, the statement of the quadrangle inequality holds more obviously):

\[ \frac{(A+B)^2}{p+q} + \frac{(B+C)^2}{q+r} \geq \frac{(A+B+C)^2}{p+q+r} + \frac{B^2}{q} \tag{2} \]

where we made the appropriate substitutions for the sums, \(A = \sum_{t=a}^{b-1} x_t\), \(B = \sum_{t=b}^{c} x_t\), \(C = \sum_{t=c+1}^{d} x_t\), as well as for the interval lengths, \(p = b-a\), \(q = c-b+1\), \(r = d-c\) (the \(\sum x_t^2\) terms appear on both sides of the quadrangle inequality and cancel out). Moreover, the fact that the sequence is ordered constrains us with \(A/p \leq B/q \leq C/r\) (these are just the means of the three parts). Now, a black magic insight allows us to make two further assumptions. First, notice that in our case, the inequality (2) is invariant
under a shift of the points (we may add the same fixed number to each \(x_t\) without changing the inequality) and we thus may assume that the sum \(B = 0\). This immediately entails \(A \leq 0\) and \(C \geq 0\). If we had \(A = 0\) or \(C = 0\), the inequality (2) would hold trivially and hence we consider only \(A < 0 < C\).

Moreover, notice that the inequality (2) is *homogeneous* (it holds for the points \(x_t\) if and only if it holds for the multiples \(\lambda x_t\)). Thus we may further assume that \(A^2 + C^2 = 1\). So putting our assumptions together (\(B = 0\), \(A < 0 < C\), \(A^2 + C^2 = 1\)), the inequality (2) is equivalent to:

\[ \frac{(A+C)^2}{p+q+r} \leq \frac{A^2}{p+q} + \frac{C^2}{q+r}, \qquad \text{while} \qquad (A+C)^2 = 1 + 2AC < 1 \quad \text{and} \quad \frac{A^2}{p+q} + \frac{C^2}{q+r} \geq \frac{A^2 + C^2}{p+q+r} = \frac{1}{p+q+r} \]

The given inequalities on the right immediately give the inequality on the left. Following the equivalences upwards, the quadrangle inequality is proven.
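The inequality is also easy to sanity-check numerically on random sorted inputs, with the cost computed directly from its definition (a sketch, not a substitute for the proof):

```python
import random

def direct_cost(xs, a, b):
    """Within-cluster squared distance of the points xs[a..b] (0-based, inclusive)."""
    seg = xs[a:b + 1]
    mu = sum(seg) / len(seg)
    return sum((x - mu) ** 2 for x in seg)

def quadrangle_holds(trials=1000, n=12, seed=0):
    """Check cost(a,c) + cost(b,d) <= cost(a,d) + cost(b,c) on random sorted points."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = sorted(rng.uniform(-10.0, 10.0) for _ in range(n))
        a, b, c, d = sorted(rng.sample(range(n), 4))
        lhs = direct_cost(xs, a, c) + direct_cost(xs, b, d)
        rhs = direct_cost(xs, a, d) + direct_cost(xs, b, c)
        if lhs > rhs + 1e-9:
            return False
    return True
```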

Now only one question remains - why is the Quadrangle inequality on the cost function sufficient to assert the monotonicity \(\alpha(i, 0) \leq \alpha(i, 1) \leq \dots \leq \alpha(i, n)\)?

*Proof*. We need to show that for all \(j\) we have \(\alpha(i, j) \leq \alpha(i, j+1)\). Suppose it was otherwise for some \(j\). Let \(a\) be the starting index of the optimal last cluster for \(dp(i, j)\) (that is, \(a = \alpha(i, j) + 1\)) and \(b\) the starting index of the optimal last cluster for \(dp(i, j+1)\), so that our assumption reads \(b < a\). But this directly gives us two relations, \(dp(i-1, a-1) + cost(a, j) < dp(i-1, b-1) + cost(b, j)\) (strict, since \(\alpha\) picks the *smallest* minimiser) and \(dp(i-1, b-1) + cost(b, j+1) \leq dp(i-1, a-1) + cost(a, j+1)\), which we can sum to obtain

\[cost(b, j + 1) + cost(a, j) < cost(a, j + 1) + cost(b, j) \]

This directly contradicts the Quadrangle inequality, which applied to \(b \leq a \leq j \leq j+1\) states \(cost(b, j) + cost(a, j+1) \leq cost(b, j+1) + cost(a, j)\).
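Again not a substitute for the proof, but the monotonicity of \(\alpha(i, \cdot)\) can be checked empirically with the naive DP (a sketch; the tuple `min` breaks ties towards the smallest index \(k\), matching the definition of \(\alpha\)):

```python
def alpha_is_monotone(xs, m):
    """Check that alpha(i, .) is non-decreasing in every row i of the naive DP."""
    n = len(xs)
    S = [0.0] * (n + 1)
    Q = [0.0] * (n + 1)
    for j, x in enumerate(xs, start=1):
        S[j] = S[j - 1] + x
        Q[j] = Q[j - 1] + x * x

    def cost(k, j):  # cost of the cluster {x_{k+1}, ..., x_j}
        s = S[j] - S[k]
        return (Q[j] - Q[k]) - s * s / (j - k)

    INF = float('inf')
    dp = [0.0] + [INF] * n
    for i in range(1, m + 1):
        ndp = [INF] * (n + 1)
        prev_alpha = -1
        for j in range(i, n + 1):
            # min over tuples: ties broken towards the smallest k
            value, alpha = min((dp[k] + cost(k, j), k) for k in range(i - 1, j))
            ndp[j] = value
            if alpha < prev_alpha:   # monotonicity violated
                return False
            prev_alpha = alpha
        dp = ndp
    return True
```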

Anyhow, how do we use the fact that \(\alpha(i, \cdot)\) is monotonic to divide and conquer? In the same way as in the binary search! That is, we will compute the value in the middle and then conquer on the left and right. Suppose that we first compute the value \(dp(i, mid)\) at the middle index \(mid\) of the row, and that it is minimised at the index \(j = \alpha(i, mid)\). Then for anything on the left of \(mid\), to compute \(dp(i, \cdot)\) we only need to check indices smaller than or equal to \(j\). Similarly, for anything on the right of \(mid\), we only need to check indices greater than or equal to \(j\).
Notice that we can indeed *compute the values \(dp(i, \cdot)\) in arbitrary order*, as they only depend on the previous row \(dp(i-1, \cdot)\). On each of the \(O(\log J)\) levels of the recursion the candidate ranges for \(k\) overlap only at their endpoints, so a whole level costs \(O(J)\). This reduces the runtime of our program from \(O(I \cdot J^2)\) to \(O(I \cdot J \log J)\).
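A sketch of the row computation in this divide and conquer order (generic over the cost function; `prev` plays the role of the row \(dp(i-1, \cdot)\), and `cost(k, j)` denotes the cost of the cluster \(\{x_{k+1}, \dots, x_j\}\)):

```python
def compute_row(prev, cost, n, i):
    """Fill cur[j] = min_{i-1 <= k < j} prev[k] + cost(k, j) for j = i..n
    in divide and conquer order, assuming the smallest minimiser is
    non-decreasing in j."""
    INF = float('inf')
    cur = [INF] * (n + 1)

    def solve(jlo, jhi, klo, khi):
        if jlo > jhi:
            return
        mid = (jlo + jhi) // 2
        best, arg = INF, klo
        for k in range(klo, min(mid, khi + 1)):   # k < mid is required
            val = prev[k] + cost(k, mid)
            if val < best:
                best, arg = val, k
        cur[mid] = best
        solve(jlo, mid - 1, klo, arg)   # left of mid: only k <= arg
        solve(mid + 1, jhi, arg, khi)   # right of mid: only k >= arg

    solve(i, n, i - 1, n - 1)
    return cur
```

The test below uses the convex length cost \((j-k)^2\), which satisfies the quadrangle inequality, so the minimiser \(k = j - 1\) is (trivially) monotone.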

Revisiting our clustering problem, we can write a faster solution using this.

Full source code that passes the problem at HackerRank (at the very least using PyPy 2) is below:

optimal_bus_stops.py

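The original listing is not reproduced here; below is a sketch of a complete solution along the lines above (Python 3). The input format - \(n\) and \(m\) on the first line, the \(n\) points on the second - is an assumption and may differ from the actual problem statement.

```python
import sys

def solve(points, m):
    """Minimum total within-cluster squared distance using at most m clusters.

    O(m * n log n): each dp row is filled in divide and conquer order.
    """
    xs = sorted(points)
    n = len(xs)
    m = min(m, n)
    S = [0.0] * (n + 1)
    Q = [0.0] * (n + 1)
    for j, x in enumerate(xs, start=1):
        S[j] = S[j - 1] + x
        Q[j] = Q[j - 1] + x * x

    def cost(k, j):  # cost of the cluster {x_{k+1}, ..., x_j}
        s = S[j] - S[k]
        return (Q[j] - Q[k]) - s * s / (j - k)

    INF = float('inf')
    dp = [0.0] + [INF] * n            # row i = 0
    best = INF
    for i in range(1, m + 1):
        cur = [INF] * (n + 1)

        def rec(jlo, jhi, klo, khi):
            if jlo > jhi:
                return
            mid = (jlo + jhi) // 2
            bval, arg = INF, klo
            for k in range(klo, min(mid, khi + 1)):
                val = dp[k] + cost(k, mid)
                if val < bval:
                    bval, arg = val, k
            cur[mid] = bval
            rec(jlo, mid - 1, klo, arg)
            rec(mid + 1, jhi, arg, khi)

        rec(i, n, i - 1, n - 1)
        best = min(best, cur[n])      # "at most m" = best over all i <= m
        dp = cur
    return best

def main():
    data = sys.stdin.read().split()
    n, m = int(data[0]), int(data[1])
    print(solve([float(t) for t in data[2:2 + n]], m))

# main()  # uncomment when submitting
```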

# TLDR

If your DP relation of the form \[ dp(i, j) = \min_{0 \leq k < j} \left[ dp(i-1, k) + cost(k+1, j) \right] \] satisfies the Quadrangle inequality \[ \text{For all } a \leq b \leq c \leq d:\ \ \ cost(a, c) + cost(b, d) \leq cost(a, d) + cost(b, c) \] you can compute the values of the \(i\)-th row in a divide and conquer order (start in the middle, conquer on the left and right) to obtain an \(O(I \cdot J \log J)\) solution.