Despite its simplicity, the problem we have been discussing in the last two posts (see the first and second post) is a nice source of examples and insights about conic programming. The formulation we obtained is:

\begin{equation}

\begin{aligned}

\min r_0&\\

s.t.&\\

&\quad ( r_0\mathbf{e}, P-\mathbf{e}p_0) \in \Pi_{i=1}^k Q^{(n+1)}.

\end{aligned}

\end{equation}

Recall that $p_0,p_i$ are row vectors and $\mathbf{e}$ is the vector of all ones. We might wonder whether the dual problem of (1) is harder or easier to solve. Using conic programming duality, this question can be answered pretty easily.

So let's use the duality theory of Conic Programming to derive the dual problem of (1). First, to derive the dual we restate (1) in a slightly different way:

\begin{equation}

\begin{aligned}

\min r_0&\\

s.t.&\\

&\quad ( r_0, p_0 - p_i) \in Q^{(n+1)} & i=1,\ldots,k.

\end{aligned}

\end{equation}
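Each constraint in (2) is just a norm bound in disguise: $(t,x)\in Q^{(n+1)}$ means $t \ge \|x\|_2$, so (2) asks for the smallest $r_0$ such that the ball of radius $r_0$ centred at $p_0$ covers all the points $p_i$. A quick NumPy sanity check of cone membership (a sketch; the helper name `in_qcone` and the toy points are just for illustration):

```python
import numpy as np

def in_qcone(y, tol=1e-12):
    # (t, x) in Q^{n+1}  <=>  t >= ||x||_2
    return y[0] >= np.linalg.norm(y[1:]) - tol

# Two points in R^2 and a candidate centre.
p0 = np.array([0.0, 0.0])
pts = np.array([[1.0, 0.0], [0.0, 2.0]])

# r0 = 2 covers both points, r0 = 1.5 does not.
print(all(in_qcone(np.r_[2.0, p0 - p]) for p in pts))   # True
print(all(in_qcone(np.r_[1.5, p0 - p]) for p in pts))   # False
```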

Following [1], we take this as our primal formulation; constructing the dual problem is then quite straightforward. Consider the standard primal form (see p. 69 in [1]):

\begin{equation}

\begin{aligned}

\min c^T x&\\

s.t.&\\

&\quad A_i x -b_i \in K_i & i=1,\ldots,k,

\end{aligned}

\end{equation}

where the $K_i$'s are cones. Problem (2) fits this form with $x=(r_0 , p_0)^T$, $c=( 1 , 0_{n})^T$ and

\[

A_i= I_{n+1},\quad b_i= \begin{pmatrix} 0 \\ p_i^T \end{pmatrix},\quad K_i=Q^{(n+1)} , \quad i=1,\ldots,k.

\]
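In NumPy, the standard-form data for a toy instance can be assembled as follows (a sketch; the point set `P` is made up for illustration):

```python
import numpy as np

P = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])           # rows are the points p_i
k, n = P.shape

c = np.r_[1.0, np.zeros(n)]          # c = (1, 0_n)^T
A = np.eye(n + 1)                    # A_i = I_{n+1} for every i
b = np.hstack([np.zeros((k, 1)), P])  # row i holds b_i^T = (0, p_i)
```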

To formulate the dual, for each cone $i$ we introduce a vector $y_i \in R^{n+1}$ of dual variables; $y_i$ must belong to the dual cone of $Q^{(n+1)}$, which is $Q^{(n+1)}$ itself since the quadratic cone is self-dual. The dual problem takes the form

\begin{equation}

\begin{aligned}

\max & \sum_{i=1}^k b_i^T y_i\\

s.t.&\\

&\sum_{i=1}^k y_i = c\\

&y_i\in Q^{(n+1)} & i=1,\ldots,k.

\end{aligned}

\end{equation}

which is equivalent to

\begin{equation}

\begin{aligned}

\max & \;\mathrm{Tr}( B Y^T ) \\

s.t.&\\

& e_k^T Y = c^T\\

&Y \in \Pi_{i=1}^k Q^{(n+1)}

\end{aligned}

\end{equation}

where \[ Y= \begin{pmatrix} y_1^T \\ \vdots \\ y_k^T \end{pmatrix},\quad B= \begin{pmatrix} 0 & p_1 \\ \vdots & \vdots \\ 0 & p_k \end{pmatrix},\quad e_k= \begin{pmatrix} 1\\ \vdots \\ 1\end{pmatrix}\in R^k.\]
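Before coding it up, we can sanity-check weak duality numerically: any dual-feasible $Y$ yields a lower bound $\mathrm{Tr}(BY^T) = \sum_i p_i u_i^T \le r_0$ for any primal-feasible $(r_0, p_0)$, where $u_i$ denotes the last $n$ entries of $y_i$. A small NumPy experiment (a sketch; the random instance and the choice of the centroid as centre are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 3
P = rng.standard_normal((k, n))     # rows are the points p_i

# Build a dual-feasible Y = (lam, U): sum_i lam_i = 1, the rows of U
# sum to zero, and ||u_i|| <= lam_i for every i.
lam = np.full(k, 1.0 / k)
U = rng.standard_normal((k, n))
U -= U.mean(axis=0)                 # columns of U now sum to zero
norms = np.maximum(np.linalg.norm(U, axis=1), 1e-12)
U *= np.min(lam / norms)            # a common scaling keeps the zero sum

dual_obj = np.sum(P * U)            # Tr(B Y^T) = sum_i p_i . u_i

# Any primal-feasible point bounds the dual objective from above:
# take the centroid as centre and the largest distance as radius.
p0 = P.mean(axis=0)
r0 = np.linalg.norm(P - p0, axis=1).max()
print(dual_obj <= r0)               # True, by weak duality
```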

Let's implement the dual formulation (5) using the MOSEK Fusion API.

The code is very simple, compact and self-documenting. Note how we also exploit the fact that the first column of $B$ consists of zeros. What about performance?

**The dual beats the primal in every sense!** Look at the plot below (tests were run on my desktop machine, a Pentium(R) Dual-Core CPU E5700 @ 3.00GHz under Linux), where we report the running time needed to build the optimization model and to solve it.

The dual is much faster both in terms of the time spent building the model, which is indeed much simpler, and in terms of solver execution time! Model building scales almost linearly for both formulations, and so does the solution time. While the former is expected, the latter might come as a surprise! Note, however, that model-building time matters most when tackling polynomially solvable problems, where the solver itself is fast.

Final take-away message:

- Solving the dual instead of the primal can be much more efficient!

- The conic formulation helps to derive the dual problem!

[1] Ben-Tal, A., & Nemirovski, A. (2001). Lectures on modern convex optimization: analysis, algorithms, and engineering applications (Vol. 2). SIAM.