Friday, February 7, 2025

A small breakthrough in the modeling of polynomial denominators

In 2018, we challenged our community to find convex functions that could not be reformulated in terms of the cones supported in MOSEK. A long-standing contender for this challenge was

$$ \frac{1}{x^4 + x^2}\quad \text{ for } x \geq 0, \label{eq:challenge}$$

but as of today, we are finally able to present a conic reformulation of this function, guided by past experience. In this blog post we take a look at the insights that enabled it.

Before diving into the details, however, we would like to clarify that conic optimization is able to encompass any convex function, including \eqref{eq:challenge}, via perspective reformulation. The problem is that the resulting cones have widely different computational properties (some even being NP-hard to optimize over!). By restricting attention to the set of cones supported in MOSEK, we effectively limit ourselves to representations with practical large-scale performance. As suggested by the community challenge, however, it turns out to be quite hard to find convex functions without a computationally efficient conic representation.

So what makes \eqref{eq:challenge} so difficult?

Insight 1: Polynomial denominators

For a polynomial $g(x)$ with real roots $g_1 \leq \ldots \leq g_k$, we can represent

$$t \geq \frac{1}{g(x)} = \frac{1}{\alpha(x - g_1)\cdots(x - g_k)}$$

on a convex subinterval, $[g_p, g_{p+1}]$, via a single power cone

$$ t \alpha (x - g_1)\cdots(x - g_p)(g_{p+1} - x)\cdots(g_k - x) \geq 1,$$

by flipping the signs of the factors, $(x - g_i)$, so that they are all nonnegative on the subinterval of interest. This method can also represent the left and right tails, $(-\infty, g_1]$ and $[g_k, \infty)$, whenever $1/g$ is convex there. The issue with $g(x) = x^4 + x^2$ is that it has a complex conjugate root pair, $\pm i$, and so the closest we can get in this direction is

$$ t x^2 (x^2 + 1) \geq 1. \label{eq:step1}$$
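For contrast, when all roots are real the recipe above applies directly. For instance, $g(x) = x(2 - x)$ makes $1/g(x)$ convex on $(0, 2)$, and

$$ t x (2 - x) \geq 1 \quad\Longleftrightarrow\quad t^{1/3} x^{1/3} (2 - x)^{1/3} \geq 1,$$

which is a three-dimensional power cone in $t$, $x$ and the affine expression $2 - x$. No such direct rewrite applies to \eqref{eq:step1}, because the factor $x^2 + 1$ is not affine.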

Insight 2: Nonlinear substitution

A natural first attempt would be to substitute out the convex term, $x^2 + 1$, from \eqref{eq:step1} using

$$ t x^2 r \geq 1,\quad r \geq x^2 + 1,$$

but this fails since letting $r \rightarrow \infty$ satisfies both constraints without enforcing the desired relation. We thus resort to a lesser-known trick, namely to rewrite $x^2 + 1 = f(f^{-1}(x^2 + 1))$ and then substitute out $f^{-1}(x^2 + 1)$. With this in mind, it would be natural to try

$$ t x^2 r^p \geq 1,\quad r \leq (x^2 + 1)^{1/p},$$

for some suitable power $p$, but this fails because $(x^2 + 1)^{1/p}$ is not concave on all of $x > 0$ for any $p \geq 1$ (near $x = 0$ its second derivative equals $2/p > 0$, so it is locally convex). Nevertheless, with a bit of persistence, we finally manage to satisfy all composition rules with

$$ t x r^4 \geq 1,\quad r \leq (x^3 + x)^{1/4}, \label{eq:step2}$$

where the latter is a convex set in need of a conic reformulation.
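As a quick sanity check that \eqref{eq:step2} really captures \eqref{eq:challenge}: for $x \geq 0$ and $0 \leq r \leq (x^3 + x)^{1/4}$ we get

$$ t (x^4 + x^2) = t\, x\, (x^3 + x) \geq t\, x\, r^4 \geq 1,$$

and conversely any pair with $t (x^4 + x^2) \geq 1$ remains feasible under the choice $r = (x^3 + x)^{1/4}$. Moreover, the upper bound $(x^3 + x)^{1/4}$ is concave for $x \geq 0$: on $x > 0$ its second derivative works out to $-\tfrac{3}{16} (x^3 + x)^{-7/4} (x^2 - 1)^2 \leq 0$.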

Insight 3: Signed sum-of-quartics decomposition

The signed sum-of-squares decomposition was introduced in a previous talk on our MOSEK channel as a new reformulation tool with great applicability. In the case of $(x^3 + x)^{1/4}$, the fourth root suggests the need for a signed sum-of-quartics decomposition instead. We do not know of any algorithm to perform this decomposition, but using the method of undetermined coefficients (guessing the final form, and mapping coefficients via quantifier elimination), we obtain

$$ x^3 + x = (ax + a)^4 - (ax - a)^4,$$

where $a = \frac{1}{2^{3/4}}$. This allows us to rewrite \eqref{eq:step2} as

$$ t x r^4 \geq 1,\quad r^4 + (ax - a)^4 \leq (ax + a)^4, \label{eq:step3}$$

where the former is a power cone and the latter is the p-norm cone of order 4, which is a well-known conic representable set (we refer to the MOSEK Cookbook). 
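For readers who prefer to double-check such manipulations numerically, the short NumPy sketch below (it does not use MOSEK, and the variable names are ours) verifies the quartic decomposition and confirms that the tightest $t$ permitted by \eqref{eq:step3} coincides with $1/(x^4 + x^2)$ on a grid of sample points.

```python
import numpy as np

# a = 1 / 2^(3/4), the coefficient from the signed sum-of-quartics decomposition
a = 2.0 ** (-3.0 / 4.0)

x = np.linspace(0.01, 10.0, 1000)

# Check the decomposition x^3 + x = (a x + a)^4 - (a x - a)^4
assert np.allclose(x**3 + x, (a * x + a) ** 4 - (a * x - a) ** 4)

# In (eq:step3) the largest r allowed by the 4-norm constraint is r = (x^3 + x)^(1/4),
# and the smallest t allowed by the power cone is then t = 1 / (x * r^4).
r = (x**3 + x) ** 0.25
t = 1.0 / (x * r**4)

# This should coincide with the original function 1 / (x^4 + x^2).
assert np.allclose(t, 1.0 / (x**4 + x**2))
```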

This concludes our reformulation from the convex function \eqref{eq:challenge} to the conic representation \eqref{eq:step3}. In time, we shall see if this process extends to a wider range of polynomial denominators.

Thursday, January 30, 2025

MOSEK 11 is now available

 MOSEK 11 has now been released!

You will find all the necessary information, installation instructions and documentation on the website for the new version. To try it out you must obtain a new license file, either by applying or reapplying for a trial or academic license on our website, or by contacting our sales team if you are a customer.

The main features of MOSEK 11 are

  • substantial efficiency improvements in the mixed-integer optimizer,
  • a new API for MATLAB.

Enjoy MOSEK 11, and we look forward to your feedback.

The MOSEK team

Thursday, January 16, 2025

The interior-point optimizer and warm-start

A question frequently asked by MOSEK users is whether the interior-point optimizer can be warm-started. The short answer is that no warm-start capability is available for the interior-point optimizer.

The reason for the lack of a warm-start capability is that no general-purpose interior-point warm-starting method that

  • is robust, i.e. numerically stable,
  • and leads to consistent performance benefits
has been suggested.

One paper on the topic contains some limited computational evidence that an interior-point warm-start may be possible. Note that the paper shows that, in the best case where good guesses for both the primal and the dual solution are available, the number of iterations is reduced by 50%. However, the paper does not consider the complications of integrating the warm-start with the presolve procedure in a commercial optimizer like MOSEK. Moreover, an interior-point method benefits a lot from presolve, and the optimizer has a fairly computationally expensive setup step that is independent of any warm-start. Hence, a large reduction in the number of iterations does not translate into an equally large decrease in the optimizer time, due to the large initialization cost.

Based on the discussion above it can be concluded that how to warm-start an interior-point optimizer remains an open question. Furthermore, obtaining a 50% reduction in the optimizer time by warm-starting would be unlikely even if warm-starting were possible. A reduction in the optimizer time somewhere between 0 and 25% is much more likely if a good warm-start method were known.