Thursday, December 19, 2024

Closed during Christmas

During the Christmas period 2024, the MOSEK office, support and sales will not be available on

24-29 and 31 December, 1 January

The MOSEK team

Monday, December 16, 2024

Second Order Christmas Programming

Santa's life used to be much easier. He would split his funds for the year equally between all children and give each child some gifts from their wishlist amounting to roughly that value. 

But corporate rules are creeping in even beyond the Arctic Circle, and the Customer Satisfaction Department decided that from this year on every child should simply get all the presents they asked for in their letter; they will even give Santa a budget exactly sufficient for that. While this method is also very simple, Santa is fuming: it is unfair, unsustainable in the long run and, most of all, against the spirit of Christmas. So after long negotiations the corporation made some concessions, and the new distribution rules should balance two objectives:

  •  each child should receive gifts of total value similar to what they wished for,
  •  children living close to each other should receive gifts of similar value.

Santa has since calmed down, but he still has to find the right tradeoff between the objectives, both to convince the corporation and to avoid betraying his ideals (too much). You have been tasked with helping him. The children live on an $n\times m$ grid, each one requested gifts of total value normalized to the interval $[0,1]$, and your budget is the total value of the requests.

Suppose the children have wishes of value $w_{i,j}$, $i=1,\ldots,n$, $j=1,\ldots,m$, and Santa will assign them gifts of value $g_{i,j}$, to be optimized. Then Santa (who is a big fan of SOCP and only ever works with the $L_2$-norm) would like to try an optimization problem such as $$\begin{array}{ll}\textrm{minimize} & \gamma_{\mathrm{ful}}\cdot \|g-w\|_2 + \gamma_{\mathrm{fair}} \cdot \|(\ldots,g_{i+1,j}-g_{i,j},g_{i,j+1}-g_{i,j},\ldots)_{i,j}\|_2 \\ \textrm{subject to} & \mathrm{sum}(g)=\mathrm{sum}(w).\end{array}$$

The first objective term represents fulfilment, and measures the gap between the gifts and expectations while the second represents fairness, and measures the total variation between each child and its immediate neighbours in the grid in terms of gift value, taken over the whole grid. Santa wants to experiment with the choices of weights, where setting $(\gamma_{\mathrm{ful}}, \gamma_{\mathrm{fair}}) = (0,1)$ will result in his good old equitable division and $(\gamma_{\mathrm{ful}}, \gamma_{\mathrm{fair}}) = (1,0)$ solves for the corporate idea of perfect fulfilment.

Fortunately for Santa this is straightforward in the MOSEK Fusion API for Python:
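A minimal sketch of how this model can be set up, assuming the wishes $w$ arrive as an $n\times m$ NumPy array of floats; the function and variable names below are illustrative, not taken from the original listing:

import numpy as np
from mosek.fusion import Model, Domain, Expr, Matrix, ObjectiveSense

def santa(w, gamma_ful, gamma_fair):
    n, m = w.shape
    with Model("santa") as M:
        g = M.variable("g", [n, m], Domain.unbounded())
        t = M.variable("t")   # bounds the fulfilment term ||g - w||_2
        s = M.variable("s")   # bounds the fairness term
        # Budget: sum(g) == sum(w)
        M.constraint(Expr.sum(g), Domain.equalsTo(float(w.sum())))
        # t >= ||g - w||_2
        M.constraint(Expr.vstack(t, Expr.flatten(Expr.sub(g, Matrix.dense(w)))),
                     Domain.inQCone())
        # s >= L2-norm of all neighbour differences in the grid
        dv = Expr.sub(g.slice([1, 0], [n, m]), g.slice([0, 0], [n - 1, m]))
        dh = Expr.sub(g.slice([0, 1], [n, m]), g.slice([0, 0], [n, m - 1]))
        M.constraint(Expr.vstack(s, Expr.flatten(dv), Expr.flatten(dh)),
                     Domain.inQCone())
        M.objective(ObjectiveSense.Minimize,
                    Expr.add(Expr.mul(gamma_ful, t), Expr.mul(gamma_fair, s)))
        M.solve()
        return np.reshape(g.level(), (n, m))

Calling santa(w, 1.0, 0.0) then reproduces the corporate plan of perfect fulfilment, while santa(w, 0.0, 1.0) recovers the good old equal split.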

Now he is ready to experiment with the datasets for various areas, which he represents as heat maps, where a shift towards red means more valuable gifts and a shift towards blue means cheaper gifts.

First column: the wishlists. Remaining columns: gift distributions from more emphasis on fairness to more emphasis on fulfilment.

Equipped with this knowledge Santa will have to propose some weights $(\gamma_{\mathrm{ful}}, \gamma_{\mathrm{fair}})$, and we leave him here with these considerations. Remember, if you don't like the direction Santa's service is going, you can always switch to one of the many independent vendors; you will find a list here: List of Christmas gift-bearers (Wikipedia).

Wednesday, November 27, 2024

The interior-point optimizer in Mosek

Mosek features

  • a simplex optimizer,
  • an interior-point optimizer,
  • and a mixed-integer optimizer.
In this post, we review the history of the interior-point optimizer in Mosek and provide references to relevant publications for further information.

The interior-point optimizer implements a primal-dual interior-point algorithm invented in the decade after the groundbreaking 1984 publication of a polynomial algorithm for linear optimization by N. Karmarkar. Mosek employs the so-called homogeneous self-dual variant, which has the following advantages:
  • a readily available starting point,
  • reliable detection of primal and dual infeasibility,
  • robustness.
The note describes an elementary version of the algorithm for linear optimization problems.  Furthermore, the paper provides some details about how the algorithm is actually implemented in Mosek. 

In the early nineties, the primal-dual interior-point algorithm for linear optimization problems was generalized by Y. Nesterov and M. Todd to conic optimization problems involving the so-called self-scaled cones. Later, self-scaled cones were shown to be the same as symmetric cones. Inspired by the work of the late J. Sturm on the software package SeDuMi, the interior-point optimizer in Mosek was extended to handle
  • the quadratic cone,
  • and the semidefinite cone.
The paper has a lot of details about the algorithm and its implementation in Mosek for the conic quadratic case. 

By definition a symmetric cone is
  • self-dual
  • and homogeneous.
For instance, the power and exponential cones are not symmetric; hence, the primal-dual algorithm by Y. Nesterov and M. Todd cannot be used to solve conic optimization problems over these cones. However, based on an idea by L. Tunçel, the interior-point optimizer in Mosek has been extended to handle the exponential and power cones. The details of the extension to the exponential cone are presented in the paper.


Thursday, November 7, 2024

MOSEK 11 beta released!

We are pleased to announce the release of MOSEK 11.0 Beta.

You will find all the necessary information, installation instructions and documentation on the website of the new version. To try it out you must obtain a new license file, either by applying or reapplying for a trial or academic license on our website, or by contacting our sales team if you are a customer.

The main features of MOSEK 11 are

  • substantial efficiency improvements in the mixed-integer optimizer,
  • a new API for MATLAB,
and in the near future you can expect a series of blog posts discussing these topics in more detail.

Enjoy MOSEK 11, and we look forward to your feedback.

The MOSEK team

Wednesday, August 28, 2024

Autumn is coming!

Currently we are experiencing some lovely summer weather here in Copenhagen, but there is a feeling that autumn is right around the corner. For MOSEK the arrival of autumn means the arrival of thousands of new people eligible to use MOSEK for free.

This is because MOSEK, through our academic initiative, grants free licenses for research or educational purposes at degree-granting academic institutions.

Whether you are a new student or a seasoned academic, why not use this opportunity to try MOSEK!

Friday, July 19, 2024

MOSEK is a proud sponsor of ISMP 2024

Next week the 25th International Symposium on Mathematical Programming will take place in Montreal. MOSEK is a proud sponsor of the conference, and we will also be present there.

If you have inquiries about MOSEK that you want to address in person, your best bet is to look out for our CEO Erling D. Andersen at the conference.

If you're happy with having your inquiry answered through email, you can always reach out to support@mosek.com.


Tuesday, June 25, 2024

Mosek is proud to sponsor EUROPT 2024

This week, starting on Wednesday June 26, EUROPT 2024 will be held at Lund University, hosted by the Department of Automatic Control.

MOSEK's industrial PhD student Utkarsh, along with his academic supervisor Martin S. Andersen, will be hosting a track on developments in interior point methods.

So if you are attending the conference and want to know more about Mosek, and maybe get your hands on a hard copy of our Modelling Cookbook, keep an eye out for Utkarsh!

If you're not attending the conference you can always find our cookbook online and reach us at support@mosek.com or sales@mosek.com if you have any inquiries.


Thursday, June 13, 2024

Mosek support for Windows on an ARM CPU

Until recently, mainly Intel and AMD CPUs were employed in mainstream laptop and server computers. Lately, however, ARM-based CPUs have gained popularity, especially in the Apple world. Since there is a Windows version for ARM, a natural question is: when will Mosek support Windows on ARM?

Ultimately, Mosek is likely to be available for Windows on ARM. However, at least the following three conditions must be satisfied for that to happen:
  1. Appropriate hardware must be available, e.g. we must be able to buy ARM based servers from DELL.
  2. The whole software ecosystem must be sufficiently mature, e.g. all the tools we require to build Mosek must be available.
  3. Some external software libraries such as an appropriate BLAS library must be available.
Based on historical experience it is unlikely that Mosek will be available for Windows on ARM within the next 12 months. However, feel free to contact Mosek support if you would like to use Mosek on Windows on ARM in the near future.

Note that Mosek is already available for Linux running on an ARM CPU.

Wednesday, June 5, 2024

The dangers of large bounds

Let's say we have a well scaled optimization problem with a very good and robust primal-dual optimal solution, which solves without any numerical difficulties. It could be a linear problem in standard form $$\begin{equation}\begin{array}{ll}\textrm{minimize}&c^Tx\\ \textrm{subject to}& Ax\leq b,\end{array}\end{equation}$$ where the primal solution $x$ and dual solution $y$ are of reasonable magnitude and satisfy $Ax\leq b$, $y\geq 0$, $A^Ty+c=0$, $b^Ty=-c^Tx$ with the best achievable precision. In other words, everything is as nice as it could be.

Obviously (right?) nothing should change if we extend the problem with huge bounds on $x$, so large that they are completely redundant for the given problem: $$\begin{equation}\begin{array}{ll}\textrm{minimize}&c^Tx\\ \textrm{subject to}& Ax\leq b,\\ & x\leq M.\end{array}\end{equation}$$ 

To see what might go wrong, let us first understand what the solver might think about problems whose feasible set lies very far out. The simplest example might be $$\begin{equation}\begin{array}{ll}\textrm{minimize}&x\\ \textrm{subject to}& x=10^{12},\end{array}\end{equation}$$ with the optimal solution $x=10^{12}$. But is that problem feasible from a practical perspective? If we work in the realm of practical applications with solutions of reasonable magnitude, then one might argue that this simple problem is for all intents and purposes infeasible, since the first feasible point lives further out than we care to look. This can be made precise using Farkas' lemma: the infeasibility certificate for this problem would be a value $y$ such that $y\leq 0$ and $10^{12}y=1$. A very good practical example of such a $y$ is $y=10^{-12}$: it violates the conditions by only $10^{-12}$, failing to satisfy $y\leq 0$ by this narrow margin. This was just a toy example, but it is easy to imagine that for a nontrivial optimization problem the solver might converge either to a solution or to such an infeasibility certificate, both of great quality! That makes problems with large solutions potentially hard to numerically distinguish from infeasible ones.

The same logic can now be applied to problem (2) with the redundant large bounds. The infeasibility certificate for it will be a pair of vectors $y,\tilde{y}$ such that $$\begin{equation}A^Ty+\tilde{y}=0,\quad b^Ty+M\cdot \mathbb{1}^T\tilde{y}=-1,\quad y\geq 0,\quad \tilde{y}\geq 0.\end{equation}$$ If we let $y$ be the dual solution to the original problem (1), then by choosing a very small negative $\tilde{y}$ (of the order of $M^{-1}$) we can easily satisfy the second condition in (4), while the first and fourth ones will only be violated by a tiny amount (of the order of $\tilde{y}$). As counterintuitive as it might be, the math works out and we get a certificate of primal infeasibility that is perfectly acceptable in finite precision arithmetic, with only tiny violations.

The rule of thumb connecting examples (2) and (3) is the following: the bound $x\leq M$ can be written in equality form as $x+\tilde{x}=M,\ \tilde{x}\geq 0$, so even with reasonable values of $x$ the value of $\tilde{x}$ in any solution must be huge, putting us in the position of example (3).

Similar considerations apply to distinguishing primal and dual infeasibility from feasibility in the presence of large coefficients in the objective and other scaling problems; this applies, for instance, to big-M models in integer optimization, large penalties, etc.

Will this come up in practice? Yes. Perhaps not for simple linear problems, and not with the extensive presolve MOSEK has, but it is not hard to set up a relatively straightforward conic problem, solve it, add huge bounds, turn presolve off and see MOSEK return a high-quality infeasibility certificate. We leave these experiments to the reader, with a possible starting point sketched below.
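A possible skeleton for such an experiment in Fusion for Python; the concrete model, data and bound value are assumptions, and depending on tolerances the bound may need to be pushed even further before the certificate appears:

from mosek.fusion import Model, Domain, Expr, ObjectiveSense

BIG = 1e12   # the huge redundant bound M

with Model("large-bounds") as M:
    M.setSolverParam("presolveUse", "off")             # turn presolve off
    x = M.variable("x", 2, Domain.unbounded())
    t = M.variable("t")
    M.constraint(Expr.vstack(t, x), Domain.inQCone())  # conic structure: t >= ||x||_2
    M.constraint(Expr.sum(x), Domain.equalsTo(1.0))    # a well scaled constraint
    M.constraint(x, Domain.lessThan(BIG))              # huge, completely redundant bounds
    M.objective(ObjectiveSense.Minimize, t)
    M.solve()
    print(M.getProblemStatus())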

Wednesday, May 22, 2024

Mosek 10.2 and floating licenses

MOSEK 10.2 ships with the licensing software (FlexLM) upgraded to version 11.19.6. This has two implications.

  • On Linux, the new floating license server drops the long-lasting dependency on the LSB (Linux Standard Base) package. In particular, it can be installed on Linux distributions without LSB, primarily RHEL9 and Debian.
  • All floating license users who want to upgrade their clients to Mosek 10.2 must also upgrade the token server to the one from Mosek 10.2, as the new clients will not talk to older token servers. As always, an upgrade of the token server does not affect older Mosek clients, which will continue to work. See the licensing manual for more info.

Have a look at the installation manual for the token server and the release notes for MOSEK 10.2.

MOSEK 10.2 and Pythonic Fusion API

MOSEK 10.2 introduces a feature that we hope will be interesting for all users of our Python Fusion API, which we call Pythonic Fusion.

Pythonic Fusion is an overlay on top of the current Fusion API (or, as some would call it, syntactic sugar) which implements standard Python operators for Fusion objects (variables, parameters, expressions). There are operators for manipulating linear expressions:

+, -, *, @, [:], .T

relational operators for introducing constraints:

==, <=, >=

and operators for introducing AND- and OR-clauses in disjunctive constraints:

&, |

The operators are used in the most natural way, familiar from NumPy, CVXPY, Pandas or any other data processing tool in Python, so we restrict ourselves to a simple example. The following Python Fusion constraint

M.constraint(Expr.mul(G, Expr.sub(x.transpose(), y.slice(0, n))),
             Domain.lessThan(c))

will be written in Pythonic Fusion as

M.constraint(G @ (x.T - y[:n]) <= c)

A more comprehensive introduction to the new syntax can be found in the manual. Examples using Pythonic Fusion are scattered throughout the manual, in particular in the portfolio optimization case study, and with time we will adapt our other Python Fusion resources to the new syntax. You are welcome to share your suggestions, comments or bug reports with us.

Important. In MOSEK 10.2 Pythonic Fusion is a separate add-on, available only after importing:

import mosek.fusion.pythonic
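
To see the overlay in action end to end, here is a small self-contained example; the data is made up purely for illustration:

import numpy as np
from mosek.fusion import Model, Domain, ObjectiveSense
import mosek.fusion.pythonic             # activates the operator overlay

A = np.array([[1.0, 2.0], [3.0, 1.0]])   # illustrative data
b = np.array([4.0, 5.0])

with Model() as M:
    x = M.variable("x", 2, Domain.greaterThan(0.0))
    M.constraint(A @ x <= b)             # relational operator builds the constraint
    M.objective(ObjectiveSense.Maximize, x[0] + x[1])
    M.solve()
    print(x.level())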


Tuesday, March 26, 2024

Easter 2024

Due to Easter, support and sales will be closed from Thursday, March 28th, until Monday, April 1st, both days inclusive.


The MOSEK team wishes everyone a peaceful Easter.

Thursday, March 14, 2024

Pi day and the geometric mean cone

Happy Pi Day! If Euler hadn't been so keen on complex numbers, then instead of $e^{i\pi}+1=0$ he would certainly have declared $$\pi^3+1=2^5$$ as the most beautiful equation. (Both sides indeed equal $32.0$.) We can easily give a formal proof of this identity with MOSEK. Note that we need to simultaneously prove two inequalities: $$\sqrt[3]{31}\geq \pi\quad \mathrm{and}\quad \sqrt[5]{\pi^3+1}\geq 2.$$ In MOSEK we have the geometric mean cone of dimension $n+1$: $$\mathcal{GM}^{n+1}=\{(x_1,\ldots,x_n,t)~:~x\geq 0, (x_1\cdot\ldots\cdot x_n)^{1/n}\geq |t|\},$$ so all we need to check is the feasibility of the conic problem $$(31,1,1,\pi)\in\mathcal{GM}^4,\quad (\pi^3+1,1,1,1,1,2)\in\mathcal{GM}^6.$$ This is straightforward with, for instance, Python Fusion:

from math import pi
from mosek.fusion import Model, Expr, Domain

with Model() as M:
    M.constraint(Expr.constTerm([31, 1, 1, pi]),
                 Domain.inPGeoMeanCone())
    M.constraint(Expr.constTerm([pi**3 + 1, 1, 1, 1, 1, 2]),
                 Domain.inPGeoMeanCone())
    M.solve()

and voilà! Here is our proof:

Interior-point solution summary
  Problem status  : PRIMAL_AND_DUAL_FEASIBLE
  Solution status : OPTIMAL
  Primal.  obj: 0.0000000000e+00    nrm: 1e+00    Viol.  var: 0e+00    acc: 1e-04  
  Dual.    obj: 0.0000000000e+00    nrm: 0e+00    Viol.  var: 0e+00    acc: 0e+00

Bonus question for the advanced user: which Mosek parameters do you have to modify to get this answer? Can you get away with modifying only one parameter, and not by too much? You can try to run it and find out for yourself here.

Wednesday, March 6, 2024

Ferrari, factorizing, and portfolios in conic form

1. Suppose we want to solve the equation $$\begin{equation*}(x^2-3x+2)(x-3)(x-4)=0.\end{equation*}$$ The straightforward approach is to multiply everything out: $$\begin{equation*}x^4-10x^3+35x^2-50x+24=0\end{equation*}$$ and use the algorithm for solving a general fourth-degree equation, discovered by the Italian mathematician Lodovico Ferrari around 1540. The procedure is pretty involved, requires solving auxiliary third-degree equations and other operations. Luckily, it can be summarized in explicit formulae for the quartic roots, which we don't quote here due to restricted space. Having gone through this ordeal we get, for example, that one of the four roots is $$\begin{equation*}x_1=\frac{-1}{6} \sqrt{6} \sqrt{\frac{\left(\frac{1}{2}\right)^{\frac{2}{3}} \left(\left(\frac{1}{2}\right)^{\frac{2}{3}} \left(36\sqrt{-3}+70\right)^{\frac{2}{3}}+5\left(\frac{1}{2}\right)^{\frac{1}{3}} \left(36\sqrt{-3}+70\right)^{\frac{1}{3}}+13\right)}{\left(36\sqrt{-3}+70\right)^{\frac{1}{3}}}}-\frac{1}{2} \sqrt{\frac{-1}{3} \left(\frac{1}{2}\right)^{\frac{1}{3}} \left(36\sqrt{-3}+70\right)^{\frac{1}{3}}-\frac{\frac{26}{3} \left(\frac{1}{2}\right)^{\frac{2}{3}}}{\left(36\sqrt{-3}+70\right)^{\frac{1}{3}}}+\frac{10}{3}}+\frac{5}{2}\end{equation*}$$ which numerically evaluates to $x_1\approx 1.00 - 1.38\cdot 10^{-17}i$, and with some good faith we conclude $x_1=1$. We obtain the other three roots similarly.

Of course you wouldn't do it that way. You would exploit the fact that your equation already comes almost completely factored: the small quadratic part $x^2-3x+2$ is easy to factor as $(x-1)(x-2)$ with standard school math, so we have $$\begin{equation*}(x-1)(x-2)(x-3)(x-4)=0\end{equation*}$$ and we get all the solutions for free, exactly. The only reason we used the previous method is that we thought we first had to reduce the problem to some standard form and then use it as a black box. The black box, however, is unnecessarily complicated for what we need here, introduces noise, and ignores the additional structure we started with, so a lot of work is wasted going back and forth.
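(As a quick numerical sanity check of the four roots, plain NumPy will do; this is just an illustration, not part of either method above:)

import numpy as np

# roots of x^4 - 10x^3 + 35x^2 - 50x + 24 = 0
print(np.roots([1, -10, 35, -50, 24]))   # approximately [4. 3. 2. 1.]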

You can read more about the twisted story of Ferrari and his mathematical duels over polynomial equations online, for instance in this article.


2. Suppose we want to optimize a portfolio with a factor covariance model of the form $\mathrm{diag}(D)+V^TFV$, with assets $x\in \mathbb{R}^n$ ($n\approx 3000$), where $D\in \mathbb{R}^{n}$ is the specific risk, $F\in \mathbb{R}^{k\times k}$ ($k\approx 40$) is the factor covariance and $V\in \mathbb{R}^{k\times n}$ is the matrix of factor exposures. The straightforward approach is to multiply everything out, in pseudocode:

Q = diag(D) + V.T @ F @ V

and use the algorithm for optimizing a general quadratic problem:

risk = sqrt(quad_form(x, Q))

The procedure is pretty involved: it requires checking that the matrix $Q\in \mathbb{R}^{n\times n}$ is positive semidefinite, which at this size is prone to accumulation of numerical errors and can be time-consuming (even though we know very well that $Q$ is PSD by construction). Simultaneously, the modeling tool and/or conic solver will need to compute some decomposition $Q=U^TU$. Having gone through this ordeal, the solver will finally have the risk in conic form

risk = norm(U@x)

albeit working with a dense matrix $U$ with $n\times n\approx 10^7$ nonzeros is suboptimal both for efficiency and for numerical accuracy. We finally obtain the solution.

Of course you wouldn't do it that way. You would exploit the fact that your risk model already comes almost completely factored: the small factor covariance matrix $F\in \mathbb{R}^{k\times k}$ is easy to factor using a standard Cholesky decomposition as $F=H^TH$, $H\in\mathbb{R}^{k\times k}$, and we get a conic decomposition of the risk for free: $$\begin{equation*}x^T(\mathrm{diag}(D)+V^TFV)x=x^T(\mathrm{diag}(D)+V^TH^THV)x=\sum_i D_ix_i^2 + \|HVx\|_2^2,\end{equation*}$$ or in pseudocode:

risk = norm([diag(D^0.5); H @ V] @ x)

The matrix $HV\in \mathbb{R}^{k\times n}$ has $kn\approx 10^5$ nonzeros (a factor of $n/k\approx 100$ fewer than before) and everything else is very sparse, so we get the solution very quickly and with high accuracy. The only reason we used the previous method is that we thought we first had to formulate a general quadratic problem and then solve it as a black box. The black box, however, is unnecessarily complicated for what we need here, introduces noise, and ignores the additional structure we started with, so a lot of work is wasted going back and forth.
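In MOSEK Fusion for Python the factored formulation could look as follows; the data here is simulated and the dimensions scaled down, so treat it as a sketch of the technique rather than a complete portfolio model:

import numpy as np
from mosek.fusion import Model, Domain, Expr, Matrix, ObjectiveSense

# Simulated data, scaled down from n ~ 3000, k ~ 40 for a quick run.
n, k = 300, 4
rng = np.random.default_rng(0)
D = rng.uniform(0.01, 0.1, n)        # specific risk
V = rng.standard_normal((k, n))      # factor exposures
B = rng.standard_normal((k, k))
F = B @ B.T + np.eye(k)              # factor covariance, PSD by construction

L = np.linalg.cholesky(F)            # F = L L^T, so H = L^T gives F = H^T H
HV = L.T @ V                         # k x n, the only dense block we need

with Model("factor-risk") as M:
    x = M.variable("x", n, Domain.greaterThan(0.0))
    t = M.variable("t")
    M.constraint(Expr.sum(x), Domain.equalsTo(1.0))   # fully invested
    # t >= ||[diag(D)^0.5; HV] x||_2 without ever forming Q = diag(D) + V^T F V
    M.constraint(Expr.vstack(t,
                             Expr.mulElm(np.sqrt(D), x),
                             Expr.mul(Matrix.dense(HV), x)),
                 Domain.inQCone())
    M.objective(ObjectiveSense.Minimize, t)           # minimize the risk bound
    M.solve()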

You can read more about the efficient conic implementation of the factor model in our Portfolio Optimization Cookbook, the documentation, a CVXPY tutorial and other resources.

Monday, February 26, 2024

New Portfolio Optimization Cookbook chapter and notebook

We are happy to share that our Portfolio Optimization Cookbook has been updated with a new chapter about regression. As always, the new chapter is accompanied by a notebook with an illustrative code example implemented in MOSEK Fusion for Python.

Enjoy! 

Monday, February 19, 2024

New third party interfaces to MOSEK

Recently we added PyPSA and linopy as third party interfaces to MOSEK.

PyPSA stands for Python for Power System Analysis; it is pronounced "pipes-ah". It is a toolbox for simulating and optimizing modern power systems.

PyPSA utilizes linopy to connect to solvers such as MOSEK, but linopy can also be used on its own. It is designed to be a link between data analysis frameworks, such as pandas and xarray, and optimization solvers, like MOSEK.
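As a taste of the linopy workflow, here is a minimal model handed off to MOSEK; the data is invented and the calls follow linopy's basic documented usage, so treat it as a sketch:

import linopy

m = linopy.Model()
x = m.add_variables(lower=0, name="x")
y = m.add_variables(lower=0, name="y")
m.add_constraints(x + 2 * y >= 10, name="demand")   # constraints from expressions
m.add_objective(3 * x + 4 * y)                      # minimized by default
m.solve(solver_name="mosek")                        # hand the model to MOSEK
print(m.solution)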

A full list of available third party interfaces to MOSEK can be seen here. If you feel like the list is not complete, whether it is an existing interface that is missing or an interface that you wish existed, let us know! This is easiest done by contacting support@mosek.com.

Monday, February 12, 2024

NEOS server and Mosek

MOSEK offers free licenses to academics as part of our academic initiative. This also extends to academic use of MOSEK through the NEOS server.

The NEOS server is a great initiative and service hosted by the Wisconsin Institute for Discovery at the University of Wisconsin in Madison. The NEOS server is a free internet-based service for solving numerical optimization problems. We at MOSEK are proud to be one of the 60 supported solvers.

Wednesday, January 24, 2024

New prices from June 2024

The new prices will come into effect on June 1st, 2024. 

The price of the basic PTS and PTON floating licenses increases by 100 USD each. Our other prices follow accordingly, with NODE licenses costing four times the price of their floating license counterpart and the annual maintenance costing 25% of the base price of the part.

This equates to a price increase of 5.1% on average.

The new prices can be found at the top of our commercial pricing page on our website.


Thursday, January 18, 2024

Seminar at LiU

On Monday (January 22) MOSEK CEO and chief scientist Erling Dalgaard Andersen will be giving a talk at Linköping University. The talk will, perhaps unsurprisingly, be about conic optimization.

The seminar will take place at 13:15 in Hopningspunkten (B-huset) on the LiU campus. It is open to everybody who is interested :)

Monday, January 15, 2024

Happy New Year!

We here at MOSEK have had a good start to the new year and we hope the same is true for all of you!

For those who have had a less-than-optimal start to the year, we would like to share some resources that might put you on the fast path to the optimum.

We know that many users of MOSEK use it for portfolio optimization; in light of that, we gladly recommend the recently published paper Markowitz Portfolio Construction at Seventy.

Furthermore, there is now an unofficial package for CVX with native Apple Silicon support. This package is due to the work of Dr. Wenyuan (Mike) Wang and MOSEK's own Michal Adamaszek, with permission from Michael C. Grant.

We hope these resources can be of use and that you will have a great 2024!