## Monday, March 26, 2018

### Progress on the pooling problem and new reformulation techniques

On the 8th of March 2018, an interesting paper on the pooling problem landed on arXiv. The authors take the pq-formulation, known to have the strongest continuous relaxation, and strengthen it further by adding convex hull-describing valid inequalities for subproblems -- one for each (product, attribute, pool)-combination. Two of the inequalities are linear, one is conic quadratic, and the last is general convex. We took up the challenge and cast the latter in conic quadratic form as well. Here is what we learned.

The inequality is this:
$$\label{eq:cut} \overline{\beta}(\overline{\gamma} x - u) + h(y, v) \leq \overline{\beta}(\overline{\gamma} - t),$$

valid on $y > 0$ when
$$h(y,v) := A y + B g(y,v),\quad g(y,v) := \frac{yv}{y+v},\quad v \geq 0,$$

where $A = \overline{\gamma}-\underline{\gamma} > 0$ and $B = \underline{\gamma} < 0$ are coefficients. The challenge is first to put this in conic quadratic form, and then to extend its validity to all of $y \in \mathbb{R}$ while staying tight on $y > 0$. On the first challenge of conic representability, our analysis led to a small surprise, namely the possibility to represent harmonic means (see this blog post). The inequality $\eqref{eq:cut}$ is thus representable on $y > 0$ by $\overline{\beta}(\overline{\gamma} x - u) + h \leq \overline{\beta}(\overline{\gamma} - t)$ and
$$\label{eq:repr:ypos} h = A y + B g,\quad \left(\begin{matrix}0.5 (y-g)\\ y+v\\ y\end{matrix}\right) \in Q_r^3,\quad v \geq 0.$$
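As a sanity check (ours, not from the paper), one can verify numerically that membership of $(0.5(y-g),\, y+v,\, y)$ in the rotated quadratic cone, i.e. $2u_1u_2 \geq u_3^2$ with $u_1, u_2 \geq 0$, holds exactly when $g \leq \frac{yv}{y+v}$, so the largest representable value of $g$ is the harmonic-mean term:

```python
# Check (sketch): rotated-cone membership of (0.5*(y - g), y + v, y)
# is equivalent to g <= y*v/(y + v) for y > 0, v >= 0.
import random

def in_rotated_cone(u1, u2, u3, tol=1e-9):
    # Q_r^3 membership: 2*u1*u2 >= u3^2 with u1, u2 >= 0
    return u1 >= -tol and u2 >= -tol and 2*u1*u2 >= u3**2 - tol

random.seed(0)
for _ in range(1000):
    y = random.uniform(0.1, 10.0)
    v = random.uniform(0.0, 10.0)
    g_max = y*v/(y + v)                                          # harmonic-mean term
    assert in_rotated_cone(0.5*(y - g_max), y + v, y)            # boundary: member
    assert not in_rotated_cone(0.5*(y - g_max - 0.1), y + v, y)  # above: not a member
print("inside the cone, g ranges exactly over g <= y*v/(y+v)")
```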

We stress that the conic form is a proof of convexity. Next is the challenge of extending this representation to be valid on all of $y \in \mathbb{R}$, while staying tight on $y > 0$. That is, as proven in the paper, we need to extend the representation such that $h \leq 0$ on $y \leq 0$. Notably, however, the current representation in $\eqref{eq:repr:ypos}$ fails this criterion as shown by the graph of $h(y, 5)$ where $A=1$ and $B=-3$.

For this task we deploy another reformulation technique which, for lack of a better name, may be called stretching the extremum. The idea is to locally, in the representation of $h(y,v)$, replace $y$ by $\hat{y} = y + s$ for some amount of stretch $\underline{s} \leq s \leq \overline{s}$. To see the effect of this let $y_{min} = \arg\min_{y \in \mathbb{R}} h(y,v)$, and note that
$$\min_{\underline{s} \leq s \leq \overline{s}} h(y+s,v) = \begin{cases} h(y+\overline{s},v), & {\rm if\ } y_{min} \geq y+\overline{s}, \\ h(y_{min},v), & {\rm if\ } y+\underline{s} \leq y_{min} \leq y+\overline{s},\\ h(y+\underline{s},v), & {\rm if\ } y+\underline{s} \geq y_{min}. \end{cases}$$

The visual interpretation of this is that we cut the graph of the function in two parts at $y_{min}$, move the two parts away from each other along the $y$-axis by amounts determined by $\underline{s}$ and $\overline{s}$, and finally fill in the gap with the extremum value $h(y_{min},v)$. In the case of $\eqref{eq:repr:ypos}$ we use $\underline{s} = 0$ and $\overline{s} = \infty$, allowing $h$ to take the smaller values of $h(y,v)$ attained at any higher value of $y$. In the graphed example of $h(y, 5)$ above, the lower bound on $h$ is thus relaxed to the left of the extremum.
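The three-case formula can be checked against brute force. A small sketch in our own notation (`s_lo`, `s_hi` stand for $\underline{s}$, $\overline{s}$), using the example values $A=1$, $B=-3$, $v=5$ from the graph:

```python
from math import sqrt

A, B, v = 1.0, -3.0, 5.0            # example values from the text

def h(y):
    return A*y + B*y*v/(y + v)      # h(y, v) for fixed v

# On the branch y > -v, h(., v) is convex with stationary point where
# A + B*v^2/(y + v)^2 = 0, i.e. y_min = v*(sqrt(-B/A) - 1).
y_min = v*(sqrt(-B/A) - 1.0)

def stretched_min(y, s_lo, s_hi):
    """min over s in [s_lo, s_hi] of h(y + s), by the three cases."""
    if y_min >= y + s_hi:
        return h(y + s_hi)
    if y + s_lo <= y_min <= y + s_hi:
        return h(y_min)
    return h(y + s_lo)

# Compare against a brute-force grid over s.
s_lo, s_hi = -1.0, 2.0
for y in [0.0, 1.0, 3.0, 5.0, 8.0]:
    grid = min(h(y + s_lo + k*(s_hi - s_lo)/1000) for k in range(1001))
    assert abs(grid - stretched_min(y, s_lo, s_hi)) < 1e-3
print("three-case formula matches brute force")
```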

As $h(0,v) = 0$ for all $v \geq 0$, we get the desired property that $h \leq 0$ is possible on $y \leq 0$. In conclusion, the globally valid conic quadratic representation of the convex inequality we started out from is given by $\overline{\beta}(\overline{\gamma} x - u) + h \leq \overline{\beta}(\overline{\gamma} - t)$ and

$$h = A \hat{y} + B g,\quad \left(\begin{matrix}0.5 (\hat{y}-g)\\ \hat{y}+v\\ \hat{y}\end{matrix}\right) \in Q_r^3,\quad v \geq 0, \quad \hat{y} \geq y.$$

## Friday, March 23, 2018

### Easter holidays 2018

Support and sales will be closed for Easter holidays

from Thursday, March 29th until Monday, April 2nd (inclusive)

We wish everyone a good Easter.

MOSEK Team

## Thursday, March 15, 2018

### AOO Copenhagen 23rd April 2018

Again this year MOSEK is one of the sponsors of the Applications of Optimization day (AOO 2018), the annual meeting of the Danish Operational Research Society (DORS). It is a great opportunity to get in touch with people interested in various aspects of optimization.

The meeting will be held at Industriens Hus, in the very heart of Copenhagen (DK) on April 23rd, 2018. Please check the official web site for more information.

## Wednesday, March 14, 2018

### Harmonic mean, p-norms and extremely disciplined conic modeling

Contrary to what we inadvertently claim in our modeling cookbook, there IS a conic quadratic model of the harmonic mean. More precisely, we can model the constraint
$$0\leq t\leq \frac{n}{\frac{1}{x_1}+\cdots+\frac{1}{x_n}},\quad x_i>0$$
Below is the model, which at the same time demonstrates the non-obvious fact that the right-hand side of (1) is actually concave. We call this approach extremely disciplined modeling, since by definition it prevents users from creating a non-convex model. We first notice that (1) is equivalent to
$$\sum_{i=1}^n\frac{t^2}{x_i}\leq nt$$
which can be written using rotated quadratic cones $2uv\geq w^2$, $u,v\geq 0$, as follows:
$$\begin{eqnarray} t^2\leq 2x_iy_i, &\quad i=1,\ldots,n \\ 2\sum_i y_i = nt. & \end{eqnarray}$$
For the special case $n=2$ we can model $t\leq \frac{2xy}{x+y}$ with just one rotated cone: for $x,y>0$ this inequality is equivalent to $2(x-\frac12 t)(y-\frac12 t)\geq \frac12 t^2$ with $x-\frac12 t,\ y-\frac12 t\geq 0$.
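The $n=2$ equivalence is easy to confirm numerically; a quick check of our own (expanding the product shows $2(x-\frac12 t)(y-\frac12 t) - \frac12 t^2 = 2xy - t(x+y)$, which is nonnegative exactly when $t \leq \frac{2xy}{x+y}$):

```python
# Check: for x, y > 0,
#   t <= 2xy/(x+y)  <=>  x - t/2 >= 0, y - t/2 >= 0, 2(x - t/2)(y - t/2) >= t^2/2.
import random

def cone_ok(x, y, t, tol=1e-9):
    u, v = x - 0.5*t, y - 0.5*t
    return u >= -tol and v >= -tol and 2*u*v >= 0.5*t*t - tol

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    hm = 2*x*y/(x + y)                  # harmonic mean of x and y (up to a factor)
    assert cone_ok(x, y, hm)            # boundary point is feasible
    assert cone_ok(x, y, 0.9*hm)        # interior point is feasible
    assert not cone_ok(x, y, hm + 0.01) # above the harmonic-mean bound: infeasible
print("single rotated cone matches t <= 2xy/(x+y)")
```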

In the same way we can model (and prove concavity of) a general "negative $p$-norm":
$$0\leq t\leq (\sum_i x_i^{-p})^{-1/p}$$
where $p>0$, $x_i>0$ and the harmonic mean corresponds to $p=1$ (up to a factor $n$). The equivalent version is
$$\sum_{i=1}^n\frac{t^{p+1}}{x_i^p}\leq t$$
which we can model as
$$\begin{eqnarray} |t|\leq x_i^{\frac{p}{p+1}} y_i^{\frac{1}{p+1}}, &\quad i=1,\ldots,n \\ \sum_i y_i = t. & \end{eqnarray}$$
Constraint (7) defines the three-dimensional power cone with exponents $(\frac{p}{p+1},\frac{1}{p+1})$. Conic models with the power cone will be expressible in MOSEK 9, see https://themosekblog.blogspot.dk/2018/01/version-9-roadmap_31.html.
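As a plain-algebra check of this substitution (our own sketch, no solver involved; the names `p`, `x`, `t`, `y` are ours): with $t = (\sum_i x_i^{-p})^{-1/p}$ and $y_i = t^{p+1}/x_i^p$, both the power-cone inequalities and the linear constraint hold with equality.

```python
# Verify that the optimal t of the negative p-norm satisfies the
# power-cone reformulation with equality.
p = 2.5
x = [0.7, 1.3, 2.0, 4.1]

t = sum(xi**(-p) for xi in x)**(-1.0/p)      # t = (sum x_i^-p)^(-1/p)
y = [t**(p + 1)/xi**p for xi in x]           # the auxiliary variables

assert abs(sum(y) - t) < 1e-9                # sum_i y_i = t
for xi, yi in zip(x, y):
    # power-cone inequality |t| <= x_i^(p/(p+1)) * y_i^(1/(p+1)), tight here
    assert abs(xi**(p/(p + 1)) * yi**(1/(p + 1)) - t) < 1e-9
print("power-cone reformulation is consistent")
```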

### Why we use projection to measure error

In classic nonlinear optimization you typically measure the error of a constraint,

$$\label{eq:nlpfunc} f(x) \leq 0,$$

by the value $\max(0, f(x))$. The problem with this measure is that reformulations such as $10^6 f(x) \leq 0$, or $f(x)^3 \leq 0$, or even $\exp(f(x)) - 1 \leq 0$ show very different errors for the exact same point. This makes it highly representation-dependent. In fact, in this measure, the inevitable error introduced by rounding the true algebraic solution $x^*$ to its nearest floating-point representation even depends on the rate of change in $f(x)$. As an example, this error grows significantly with higher values of $x^*$ in $\exp(x)$, as well as with lower positive values of $x^*$ in $1/x^2$.

In conic optimization the nonlinearities you model are independent of representation. This is because you only ever constrain variables to a cone, i.e., a set of points. This allows us to change and tune internal representations for optimal performance. Correspondingly, we also want the error reports you receive from MOSEK to be independent of representation. Our choice of projection, being the minimum distance from point to cone, is a widely known, computationally inexpensive and mathematically well-defined concept that fits like a glove. This definition, however, does not always fit the intuitive understanding of error as captured by the following question we sometimes get:

Q: The result $\hat x = (0, 1{\rm e}18, 1{\rm e}3)$ fails to satisfy $2 x_1 x_2 \geq x_3^2$; the left is zero and the right is a million! Why does MOSEK claim feasibility when the conic domain $x \in \mathcal{Q}_r^3$ is so clearly violated?

A: Note that $x^* = (5{\rm e-}13, 1{\rm e}18, 1{\rm e}3)$ is feasible, showing that the projection distance to the conic domain is less than $\| x^* - \hat x\|_2 = 5{\rm e-}13$. This is a small violation in floating-point computations. In the particular case of $\hat x$, we may reformulate the constraint to $x_1 \geq \frac{x_3^2}{2x_2}$ and obtain a comparable error measure by direct comparison of left and right.
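The numbers in this Q&A are easy to replay (a sketch of ours, not MOSEK output):

```python
# x_hat violates 2*x1*x2 >= x3^2 by a million in the naive "left vs right"
# measure, yet a feasible point sits only 5e-13 away.
x_hat  = (0.0,   1e18, 1e3)
x_star = (5e-13, 1e18, 1e3)

def violation(x):
    # naive error measure: max(0, x3^2 - 2*x1*x2)
    return max(0.0, x[2]**2 - 2*x[0]*x[1])

print(violation(x_hat))     # 1e6: looks huge in the naive measure
print(violation(x_star))    # ~0: feasible up to floating point
dist = sum((a - b)**2 for a, b in zip(x_star, x_hat))**0.5
print(dist)                 # 5e-13: bound on the projection distance
```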

If you wonder why there are no feasibility tolerances on conic domains, the answer is simple. They are satisfied symbolically in all computations, even if the interior-point solver is terminated before convergence. The small violations you may see are due to floating-point arithmetic.

Finally, if you want to know more about projection theory, computation and how we use it in MOSEK, we invite you to Henrik's session at ISMP 2018.

### Happy Pi Day

Code:

```python
from mosek.fusion import *
from math import sqrt

def pi(n):
    M = Model()
    x = M.variable(n, Domain.inQCone())
    M.constraint(Expr.mulElm(range(1, n), x.slice(1, n)),
                 Domain.greaterThan(sqrt(6)))
    M.objective(ObjectiveSense.Minimize, x.index(0))
    M.solve()
    return M.primalObjValue()

print(pi(100))
print(pi(1000))
print(pi(10000))
print(pi(100000))
print(pi(200000))
```

Output:

```
3.13198074852
3.14063710035
3.14149715507
3.14158310484
3.14158824714
```

Hint:

$\sum_{i=1}^\infty\frac{1}{i^2}=\frac{\pi^2}{6}$.
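Spelling the hint out: the model minimizes $x_0$ subject to $x \in \mathcal{Q}^n$ and $i\,x_i \geq \sqrt{6}$ for $i = 1,\ldots,n-1$, so at the optimum $x_i = \sqrt{6}/i$ and $x_0 = \sqrt{6\sum_{i=1}^{n-1} 1/i^2} \to \pi$. A plain-Python replica of the closed-form optimum (our own sketch, no solver needed):

```python
from math import sqrt

def pi_approx(n):
    # closed-form optimal value of the conic model above
    return sqrt(6.0 * sum(1.0/i**2 for i in range(1, n)))

for n in [100, 1000, 10000]:
    print(pi_approx(n))     # agrees with the Fusion output above
```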

## Friday, March 9, 2018

### MATLAB Computational Finance Conference 2018

MOSEK is one of the exhibitors at the MATLAB Computational Finance Conference 2018 in London on the 24th May.

The event will include keynote presentations from The Bank of England and MathWorks as well as sessions with leading financial practitioners and academic institutions.

For details, speakers and registration see

## Friday, March 2, 2018

### ISMP 2018

MOSEK is one of the sponsors of the 23rd International Symposium on Mathematical Programming ISMP 2018 which takes place in Bordeaux on 1-6 July 2018.

There will be plenty of opportunities to meet us in Bordeaux. Here are our tentative talk titles:
• Erling Andersen, MOSEK version 9
• Joachim Dahl, Extending MOSEK with exponential cones
• Henrik A. Friberg, Projection and presolve in MOSEK: exponential and power cones
• Sven Wiese, Mixed-integer Conic Optimization in MOSEK, in a session on mixed-integer conic optimization that Sven is organizing
• Michał Adamaszek, Exponential cone in MOSEK: overview and applications
Stay tuned for more details closer to the date.