Wednesday, July 15, 2020

CO@Work 2020

From our colleagues organizing the Combinatorial Optimization at Work (CO@Work) 2020 summer school, here is a short advert for this very interesting event:



In its sixth edition, CO@Work will be a fully online event, taking place from September 14 to 26, 2020.

This summer school is aimed at master students (in their final year), PhD students, post-docs and everyone else interested in combinatorial optimization and mathematical programming in industrial applications. Lectures will be held by around 30 experts from all over the world, including leading researchers from TU Berlin, FU Berlin, Polytechnique Montréal, RWTH Aachen, University of Southern California, University of Edinburgh, TU Darmstadt, and University of Exeter, as well as developers and managers from FICO, Google, Amazon, SAP, Siemens, IBM, SAS, Gurobi, Mosek, GAMS, Litic, and many more.

We have arranged a setup that covers all time zones: pre-recorded lectures plus two live Q&A sessions and two hands-on exercise sessions per day (scheduled 11 hours apart).

You can find more information and a registration link here: http://co-at-work.zib.de/

Registration is free, and as a TU Berlin course it gives 10 ECTS credit points.

Friday, June 5, 2020

To upgrade or not to upgrade, that is the question

A customer complained yesterday that Mosek version 7.1 did not run on a recent AMD CPU. Perhaps not that surprising, since the last version 7.1 release was in 2016 and in any case that version is no longer supported.

Now we tried to run Mosek versions 8.1 and 9.2 on the publicly available benchmark problem pds-90 on a recent Dell server with an AMD EPYC 7402P 24-core CPU. Version 8.1 took 169 seconds, whereas 9.2 took 67 seconds. The reason for the huge difference is that Mosek normally employs the Intel MKL BLAS library to perform dense matrix operations quickly; however, MKL performs poorly on recent AMD CPUs. Therefore, starting in version 9, we switch to the BLIS library when running on an AMD CPU. That switch provides the big boost.

So from this we can learn that upgrading to a newer Mosek version can provide a huge performance boost, particularly when Mosek is employed on brand-new hardware.

Now the customer was not too keen to upgrade from version 7.1 to version 9.2, presumably fearing compatibility issues, which is understandable. However, if he had upgraded to version 8.1 earlier, then the jump to version 9.2 would have been smaller. By not upgrading you of course save some hassle, but you pay the price on the day you are forced to upgrade. That may be the day your old Mosek version performs poorly on your brand-new hardware.

Therefore, we recommend not falling too far behind with upgrades. In particular, if version 9 is the current major version, then you should at least be using version 8.
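
If you are unsure which version you are linking against, the installed library can report it. A minimal sketch in Python (the exact shape of the returned tuple may vary between major versions):

import mosek

# Query the version of the MOSEK library actually loaded,
# e.g. (9, 2, 0) for a 9.2 release
print("Running MOSEK", mosek.Env.getversion())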


Wednesday, May 6, 2020

n-queens in Fusion

In a nice post on LinkedIn, Alireza Soroudi presents a short GAMS integer model for the classical n-queens puzzle: place the maximal number of non-attacking queens on an $n\times n$ chessboard.

We couldn't resist writing it up in Python Fusion as well.
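
Below is a minimal sketch of how such a model can look in Python Fusion (the variable names and the printing code are just one possible choice): binary variables mark occupied squares, each row, column, diagonal and anti-diagonal holds at most one queen, and we maximize the number of queens.

from mosek.fusion import Model, Domain, Expr, ObjectiveSense

n = 8
with Model("n-queens") as M:
    # x[i,j] = 1 iff a queen stands on square (i, j)
    x = M.variable("x", [n, n], Domain.binary())

    # At most one queen in every column and every row
    M.constraint(Expr.sum(x, 0), Domain.lessThan(1.0))
    M.constraint(Expr.sum(x, 1), Domain.lessThan(1.0))

    # At most one queen on every diagonal (i - j = d)
    for d in range(-n + 1, n):
        idx = [[i, i - d] for i in range(n) if 0 <= i - d < n]
        M.constraint(Expr.sum(x.pick(idx)), Domain.lessThan(1.0))

    # ... and on every anti-diagonal (i + j = d)
    for d in range(2 * n - 1):
        idx = [[i, d - i] for i in range(n) if 0 <= d - i < n]
        M.constraint(Expr.sum(x.pick(idx)), Domain.lessThan(1.0))

    # Place as many queens as possible
    M.objective(ObjectiveSense.Maximize, Expr.sum(x))
    M.solve()

    # Print the board
    sol = x.level()
    for i in range(n):
        print(" ".join("♕" if sol[i * n + j] > 0.5 else "_" for j in range(n)))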

And here is the solution we get for $n=8$:
[[_ ♕ _ _ _ _ _ _]
 [_ _ _ _ _ ♕ _ _]
 [♕ _ _ _ _ _ _ _]
 [_ _ _ _ _ _ ♕ _]
 [_ _ _ ♕ _ _ _ _]
 [_ _ _ _ _ _ _ ♕]
 [_ _ ♕ _ _ _ _ _]
 [_ _ _ _ ♕ _ _ _]]

Monday, May 4, 2020

Grouping in Fusion

We sometimes get asked how to efficiently perform a "group by" operation on a variable. That is, we have a variable $x=(x_0,\ldots,x_{n-1})$ naturally divided into groups and we want to add constraints for each group or constraints on aggregate values within groups. This arises for instance in portfolio optimization where the groups are assets, each consisting of a (varying) number of tax lots.

To keep the discussion more concrete, suppose we have a variable $x=(x_0,x_1,x_2,x_3,x_4,x_5)$ which consists of 3 groups $(x_0,x_1,x_2)$, $(x_3)$ and $(x_4,x_5)$. Let's say we want to express:
  • an upper bound $b_i$ on the total value within each group,
  • a joint volatility constraint such as $$\gamma\geq\left\|G\cdot \left[\begin{array}{c} x_0+x_1+x_2-i_0 \\ x_3-i_1 \\ x_4+x_5-i_2\end{array}\right]\right\|_2$$ for some matrix $G$ and constant index weights $i_0,i_1,i_2$,
  • the constraint that all values within one group have the same sign.

Pick, slice
A straightforward solution is to pick (Expr.pick) the content of each group into a separate view and add relevant constraints in a loop over the groups. One could also take slices (Expr.slice) when the groups form contiguous subsequences. This approach is implemented below in Python Fusion.
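
Here is a sketch of that approach; the bounds b, the matrix G and the index weights idx are made-up example data:

from mosek.fusion import Model, Domain, Expr, Matrix
import numpy as np

groups = [[0, 1, 2], [3], [4, 5]]        # group membership of x_0,...,x_5
b = [1.0, 0.5, 1.5]                      # example per-group upper bounds
idx = np.array([0.3, 0.4, 0.3])          # example index weights i_0, i_1, i_2
G = Matrix.dense(np.random.randn(3, 3))  # example volatility matrix
gamma = 1.0

M = Model("grouping")
x = M.variable("x", 6, Domain.unbounded())

# Pick each group into its own view and constrain its sum
groupsums = [Expr.sum(x.pick(g)) for g in groups]
for j, s in enumerate(groupsums):
    M.constraint(s, Domain.lessThan(b[j]))

# Joint volatility constraint: gamma >= || G * (groupsums - idx) ||_2
dev = Expr.sub(Expr.vstack(groupsums), idx)
M.constraint(Expr.vstack(gamma, Expr.mul(G, dev)), Domain.inQCone())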

Loop-free
The previous solution requires picking and stacking expressions in a loop over all groups. This can sometimes be slow. A nicer and more efficient solution uses a bit of linear algebra to perform the grouping. We first encode the groups via a sparse membership matrix $\mathrm{Memb}$, where the rows are groups, columns are entries of $x$, and there is a $1$ whenever an entry belongs to a group. In our example $$\mathrm{Memb}=\left[\begin{array}{cccccc}1&1&1&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&1\end{array}\right] .$$
This matrix is easy to construct in Fusion. 
Note that $\mathrm{Memb}\cdot x$ is the vector of all group sums, in our case $$\mathrm{Memb}\cdot x = \left[\begin{array}{c} x_0+x_1+x_2 \\ x_3 \\ x_4+x_5\end{array}\right].$$
That means we can express both of the previous constraints in a single call without looping. Since $\mathrm{Memb}$ is very sparse, this is a very efficient representation. Of course it is important to keep $\mathrm{Memb}$ as a sparse matrix.
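
For example, continuing the sketch above (same model, data and imports):

# Sparse membership matrix: one row per group, one column per entry of x
rows = [0, 0, 0, 1, 2, 2]
cols = [0, 1, 2, 3, 4, 5]
Memb = Matrix.sparse(3, 6, rows, cols, [1.0] * 6)

# All group bounds in one call: Memb * x <= b, elementwise
M.constraint(Expr.mul(Memb, x), Domain.lessThan(b))

# The volatility constraint, also without a loop
dev = Expr.sub(Expr.mul(Memb, x), idx)
M.constraint(Expr.vstack(gamma, Expr.mul(G, dev)), Domain.inQCone())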
Loop-free same sign
We can now use the same matrix to model the last problem from the introduction: all entries within each group must have the same sign. We introduce a sequence of binary variables $z=(z_0,\ldots,z_{g-1})$, one for each of the $g$ groups. The $j$-th variable will determine the sign of the elements in the $j$-th group. That is imposed by the constraints $$-M(1-z_j)\leq x_i\leq Mz_j,$$ whenever $x_i$ belongs to the $j$-th group. We can use the pick/slice strategy per group, as before, or observe that $\mathrm{Memb}^T\cdot z$ produces the vector with the correct binary variable for each entry in $x$. In our case, if $z=(z_0,z_1,z_2)$ then $$\mathrm{Memb}^T\cdot z = (z_0,z_0,z_0,z_1,z_2,z_2)^T.$$ Now each of the two inequalities above can be written as a single vectorized constraint:
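
A sketch, again continuing the example above; the big-M value is an assumed bound on $|x_i|$:

# One binary sign indicator per group
z = M.variable("z", 3, Domain.binary())

# Memb^T * z broadcasts each group's indicator to the entries of x
sgn = Expr.mul(Memb.transpose(), z)

bigM = 100.0   # assumed valid upper bound on |x_i|

# x_i <= M*z_j  and  x_i >= -M*(1 - z_j), vectorized over all six entries
M.constraint(Expr.sub(x, Expr.mul(bigM, sgn)), Domain.lessThan(0.0))
M.constraint(Expr.add(x, Expr.mul(bigM, Expr.sub(Expr.constTerm(6, 1.0), sgn))),
             Domain.greaterThan(0.0))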


Monday, April 6, 2020

Easter 2020

Support and sales are closed from Thursday, April 9th, until Monday, April 13th, inclusive.

The MOSEK team wishes everyone a safe Easter.

Wednesday, March 11, 2020

Coronavirus COVID-19

March 11th: As of now, Denmark is locking down for at least two weeks in response to the coronavirus pandemic. In the interest of public health we are closing the MOSEK physical office and continuing to work from home. At the moment we don't expect any major disruptions, although small delays in e-mail responses are possible. Thank you in advance for your patience.

March 16th: If you are working from home and for this reason require special arrangements regarding your license, send us an email at support@mosek.com.

We will update this post if there is important new information.

Monday, March 9, 2020

Parametrized Fusion - benchmarks

We present some indicative benchmarks of the new Parametrized Fusion, to be released in MOSEK 9.2. We compare a few typical optimization models being reoptimized in two scenarios:

  • the model is constructed from scratch with new data for each optimization, MOSEK version 9.1 (green bars)
  • the parametrized model is constructed once and parameters are set before every optimization, MOSEK version 9.2 (blue bars)
The red bar in all cases represents the amount of time spent in the optimizer.

1. Markowitz portfolio optimization

Here we consider a simple model
$$\begin{array}{ll}\mbox{maximize} & \mu^T x \\ \mbox{subject to} & e^Tx=1,\\ & \gamma \geq \|Fx\|_2, \\ & x\geq 0, \end{array}$$

where
  • $x\in\mathbb{R}^{2000}$,
  • $F\in \mathbb{R}^{200\times 2000}$ is a dense matrix,
  • $\gamma$ is the only parameter changed between optimizations,
  • we resolve the model for 200 values of $\gamma$.
Note that by parametrizing we avoid inputting the same big matrix $F$ over and over again.
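
As an illustration, here is roughly how the parametrized model can be written with the new 9.2 Fusion API (a sketch with random data; only the parameter value changes between solves):

from mosek.fusion import Model, Domain, Expr, ObjectiveSense, Matrix
import numpy as np

n, k = 2000, 200
mu = np.random.randn(n)
F = Matrix.dense(np.random.randn(k, n))   # the big dense matrix, input only once

M = Model("markowitz")
x = M.variable("x", n, Domain.greaterThan(0.0))
gamma = M.parameter("gamma")              # the only data changing between solves

M.constraint(Expr.sum(x), Domain.equalsTo(1.0))
M.constraint(Expr.vstack(gamma, Expr.mul(F, x)), Domain.inQCone())
M.objective(ObjectiveSense.Maximize, Expr.dot(mu, x))

# Re-solve for 200 values of gamma without rebuilding the model
for g in np.linspace(0.05, 1.0, 200):
    gamma.setValue(g)
    M.solve()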

2. Various models

Next we run a few models implemented in Python only.
  • MPC - Model Predictive Control, similar to this demo, with 15 states, 8 inputs, horizon length $T=80$, parametrized by the initial state, 100 resolves.
  • Polytopes - a geometric polytope containment problem, the parameter is a $3\times 3$ matrix $P$ which enters in many constraints of the form $Px=y$, 1000 resolves.
  • Elastic - a linear regression model with main term $\|Ax-b\|_2$, where $A\in\mathbb{R}^{3000\times 100}$ is dense, parametrized by $b\in\mathbb{R}^{3000}$, 60 resolves.
  • Option - a big option pricing model with complicated structure and multiple parameters, 10 resolves.