Closed Form Solutions in Nuclear Norm Related Optimization and Around

Please refer to the new version of this article for some revisions.

The nuclear norm has been in the spotlight of optimization for quite a while, largely due to the breakthrough work of Emmanuel Candès on exact matrix completion and recovery a few years back. Summing up the singular values, this norm can be considered as the {\ell_1} norm in the spectral domain, reminiscent of the intriguing relationship between the {\ell_0} (quasi)norm and the {\ell_1} norm revealed in compressive sensing. In fact, this norm best approximates the rank of a given matrix under some technical conditions (i.e., it serves as the convex surrogate). To be precise, for a particular matrix {\mathbf{A}}, its nuclear norm is defined as {\|\mathbf{A}\|_* = \sum_i \sigma_i}, where {\sigma_i} denotes the {i^{th}} singular value (henceforth assuming singular values sorted in descending order). Then {\|\mathbf{A}\|_*} is the best under-estimator (technically the convex envelope) of {\textrm{rank} \left(\mathbf{A}\right)} over the domain {\|\mathbf{A}\| \leq 1}, where {\| \cdot\|} is the spectral norm, or equivalently the largest singular value {\sigma_1}. The last condition on the spectral norm is in fact mild, as one can always scale the matrix so that {\sigma_1 \leq 1} holds (in practical optimization, this possibly entails adjustment of parameters).
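A minimal numerical sketch of these definitions (the random test matrix and variable names are mine): compute the nuclear norm from the singular values and observe that, once the matrix is scaled onto the spectral-norm ball, the nuclear norm under-estimates the rank.

```python
import numpy as np

# Nuclear norm = sum of singular values; after scaling so that sigma_1 = 1
# (i.e., onto the spectral-norm ball), it under-estimates the rank.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # a rank-3 matrix
A = A / np.linalg.norm(A, 2)                  # rescale so that sigma_1 = 1

sigma = np.linalg.svd(A, compute_uv=False)
print(sigma.sum(), np.linalg.matrix_rank(A))  # ||A||_* <= rank(A) here
```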

Most nuclear-norm-related (convex) optimization problems can be recast as semidefinite programs (SDPs), thanks to the simple variational characterization of the nuclear norm (see Proposition 2.1 in this paper)

\displaystyle \min_{\mathbf{W}_1, \mathbf{W}_2} \quad \frac{1}{2}\left(\mathrm{trace} \left(\mathbf{W}_1\right)+ \mathrm{trace} \left(\mathbf{W}_2\right) \right), \qquad \mathrm{s.t.} \quad \begin{bmatrix} \mathbf{W}_1 & \mathbf{X} \\ \mathbf{X}^\top & \mathbf{W}_2 \end{bmatrix} \succeq \mathbf{0}. \ \ \ \ \ (1)
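To make (1) concrete, here is a small numpy sketch (the choice of {\mathbf{W}_1} and {\mathbf{W}_2} below is the standard feasibility argument, not code from any cited paper): with {\mathbf{X} = \mathbf{U\Sigma V}^\top}, taking {\mathbf{W}_1 = \mathbf{U\Sigma U}^\top} and {\mathbf{W}_2 = \mathbf{V\Sigma V}^\top} gives a feasible block matrix whose objective value equals {\|\mathbf{X}\|_*}.

```python
import numpy as np

# Sketch: a feasible point of the SDP (1) attaining the nuclear norm.
# With X = U diag(s) V^T, take W1 = U diag(s) U^T and W2 = V diag(s) V^T;
# then the block matrix equals [U; V] diag(s) [U; V]^T, hence PSD, and
# (trace(W1) + trace(W2)) / 2 = sum(s) = ||X||_*.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
W1 = U @ np.diag(s) @ U.T
W2 = Vt.T @ np.diag(s) @ Vt

G = np.block([[W1, X], [X.T, W2]])
print(np.linalg.eigvalsh(G).min() >= -1e-10)          # PSD up to round-off
print(0.5 * (np.trace(W1) + np.trace(W2)), s.sum())   # both equal ||X||_*
```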

Despite the elegance, this approach is numerically unwise, as most interior-point solvers can only handle problems up to hundreds of variables, for both hardware and algorithmic reasons. On the other hand, most current applications such as large-scale matrix completion and Robust PCA (by Cand\`{e}s et al) have been numerically feasible thanks to the closed-form solutions to some canonical forms of nuclear norm optimization (most popular and foremost the singular value thresholding (SVT) operator). This line of results has served as the key to the success of recent numerical schemes such as the accelerated proximal gradient (APG) and the augmented Lagrangian multiplier (ALM) methods for nuclear norm optimization. Hence the discovery of, and hopefully a systematic investigation into, such closed-form solutions is both necessary and interesting in its own right.

— 1. Realizing the importance of the unitarily invariant property (UIP) —

My recent investigation into the low-rank representation for subspace segmentation (ref. my earlier post here and this paper) has boiled down to understanding this problem

\displaystyle \min_{\mathbf{Z}} \quad \| \mathbf{Z}\|_*, \qquad \mathrm{s.t. } \quad \mathbf{XZ} = \mathbf{X} \ \ \ \ \ (2)

and it turns out

Theorem 1 The minimizer {\mathbf{Z}^*} to problem (2) obeys {\mathbf{Z}^* = \mathbf{VV}^\top}, where {\mathbf{X} = \mathbf{U\Sigma V}^\top} is the skinny (compact) singular value decomposition of {\mathbf{X}}. Moreover, {\mathbf{Z}^*} is also the minimizer to this problem:

\displaystyle \min_{\mathbf{Z}} \quad \| \mathbf{Z}\|_*, \qquad \mathrm{s.t. } \quad \mathbf{XZ} = \mathbf{X} , \mathbf{Z}\succeq \mathbf{0}. \ \ \ \ \ (3)
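A quick numerical check of Theorem 1 (a sketch; the random test matrix and variable names are mine): with the skinny SVD of {\mathbf{X}}, the matrix {\mathbf{VV}^\top} is feasible for both (2) and (3), is positive semidefinite, and its nuclear norm equals {\mathrm{rank}\left(\mathbf{X}\right)}.

```python
import numpy as np

# With the skinny SVD X = U S V^T, the claimed minimizer Z = V V^T satisfies
# X Z = X, Z >= 0, and ||Z||_* = rank(X).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 10))   # rank 4

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10))
V = Vt[:r].T                     # right singular vectors of the nonzero singular values
Z = V @ V.T

print(np.allclose(X @ Z, X))                        # feasibility: X Z = X
print(np.linalg.eigvalsh(Z).min() >= -1e-10)        # Z is PSD
print(np.linalg.svd(Z, compute_uv=False).sum(), r)  # ||Z||_* equals rank(X)
```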

While these results bear some direct links with what comes later, they have also provided important clues for investigating the closed-form problems below. When trying to model practical data points contaminated with outliers, we consider this model

\displaystyle \min_{\mathbf{Z, E}} \quad \| \mathbf{Z}\|_* + \lambda \| \mathbf{E}\|_{2, 1}, \qquad \mathrm{s.t. } \quad \mathbf{XZ} +\mathbf{E} = \mathbf{X} , \mathbf{Z}\succeq \mathbf{0}. \ \ \ \ \ (4)

Numerically tackling this model involves the subproblem

\displaystyle \min_{\mathbf{Z}} \quad \| \mathbf{Z}\|_* + \frac{\mu}{2}\| \mathbf{X - Z}\|_F^2, \qquad \mathrm{s.t. } \quad \mathbf{Z} \succeq \mathbf{0}. \ \ \ \ \ (5)

It turns out that the unitarily invariant property of the nuclear norm is critical to establishing the closed-form solution to this problem.

Definition 2 (Unitarily Invariant Norms) Let {\mathbf{X} \in \mathcal{C} ^{m\times n}}, and let {\mathbf{U}} and {\mathbf{V}} be unitary matrices (i.e., {\mathbf{UU^*} = \mathbf{U^*U} = \mathbf{I}}, and similarly for {\mathbf{V}}). A norm {\|\cdot\|_{\ell}} is unitarily invariant if and only if {\|\mathbf{X}\|_\ell = \|\mathbf{UXV}\|_\ell} for any such {\mathbf{U}} and {\mathbf{V}}. ({\mathbf{U}^*} here denotes the conjugate transpose over the complex matrix field.)

Two families of unitarily invariant norms are particularly relevant here. One is the Schatten {p} norms, defined as {\|\mathbf{X}\|_{\mathrm{sp}} = \left(\sum_{i=1}^r \sigma_i^p\right)^{1/p}} (assuming {\mathrm{rank}\left(\mathbf{X}\right) = r}); the other is the Ky-Fan {k} norms, defined as {\|\mathbf{X}\|_{\mathrm{kk}} = \sum_{i=1}^k \sigma_i } (assuming descending order of singular values). Clearly the nuclear norm is the Schatten {1} norm (or the Ky-Fan {r} norm), and hence unitarily invariant. Similarly the Frobenius norm is the Schatten {2} norm and hence also enjoys this nice property.
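A tiny numpy sketch checking the UIP for these two norms (generating random orthogonal matrices via QR is my own choice of illustration):

```python
import numpy as np

# The nuclear and Frobenius norms are unchanged under left/right
# multiplication by orthogonal (unitary) matrices.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 7))
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal U
V, _ = np.linalg.qr(rng.standard_normal((7, 7)))   # random orthogonal V

nuc = lambda A: np.linalg.svd(A, compute_uv=False).sum()
print(np.isclose(nuc(X), nuc(U @ X @ V)))                        # nuclear norm
print(np.isclose(np.linalg.norm(X), np.linalg.norm(U @ X @ V)))  # Frobenius norm
```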

— 2. Deriving closed-form solutions of nuclear norm problems by the unitarily invariant property —

I have progressively realized that the UIP can be used to figure out closed-form solutions to several canonical forms of nuclear norm optimization problems. Though the results presented below are known from various authors, I do not like their proofs in general, because the proofs are constructive in nature: beautiful, but neither generalizable nor stimulating, especially for more complicated forms and further problems.

Before the exposition, I'd like to briefly review some facts from convex analysis and optimization (ref. Boyd and Vandenberghe). A real-valued function {f\left(x\right)} is convex if and only if its domain {\mathrm{dom}\left(f\right)} is a convex set and {f\left(\alpha x_1 +\left(1-\alpha\right)x_2\right) \leq \alpha f\left(x_1\right) + \left(1-\alpha\right)f\left(x_2\right)} for all {x_1, x_2 \in \mathrm{dom}\left(f\right)} and {0\leq\alpha\leq 1} (the description can be considerably simplified with the extended-value extension); it is strictly convex if and only if the inequality holds strictly whenever {x_1 \neq x_2} and {0<\alpha<1}. Now consider minimizing a convex function {f\left(x\right)} in practice, or generically

\displaystyle \min. \quad f\left(x\right), \qquad \textrm{s.t.} \quad x\in \mathcal{C}, \ \ \ \ \ (6)

where {\mathcal{C}} is the convex domain dictated by the constraints. Suppose {\mathcal{C}} is nonempty, i.e., the problem is feasible; then

Lemma 3 (Convexity and Uniqueness of Minimizers) For the generic convex optimization (6) with a nonempty domain {\mathcal{C}}, we have

  • 1) The additive composite function {g\left(x\right) = \sum_{i=1}^k w_i f_i\left(x\right)}, {x\in \mathcal{C}, w_i >0}, with convex components is strictly convex on {\mathcal{C}} so long as any one component function is strictly convex.
  • 2) If {f\left(x\right)} is strictly convex, the minimizer to (6) is always unique.
  • 3) When {f\left(x\right)} is not strictly convex, the minimizer(s) to (6) may or may not be unique.

The first and second claims can be easily verified from the first principles of strict convexity, while the third can be loosely illustrated with examples, especially on why strict convexity is not a necessary condition for uniqueness. Take {f\left(x\right)=\| x\|_1, x \in \mathcal{R}^n} for example. The objective is obviously not strictly convex (in fact, local linearity is a typical culprit), yet the only minimizer is the origin (it looks as if the function were strictly convex in the {\epsilon} vicinity of the origin). There are numerous reasons for which such uniqueness can arise and I won't push towards enumerating them; instead, I'd like to note that a useful technique to prove such uniqueness is perturbation analysis: a candidate minimizer of (6), say {x_\diamond}, is the unique minimizer if and only if {f\left(x_\diamond +\delta x\right) > f\left(x_\diamond\right)} for every nonzero {\delta x} with {x_\diamond +\delta x \in \mathcal{C}}. This is justified since, for convex functions, any local minimizer is also a global minimizer, while the convexity of the domain implies such minimizers must be spatially contiguous (in fact, they form a convex set).
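Here is a toy numpy sketch of that perturbation check for the {\ell_1} example (the sampling scheme is just my own illustration): every sampled nonzero perturbation strictly increases the objective at the origin.

```python
import numpy as np

# Perturbation check for f(x) = ||x||_1 at the candidate x* = 0: every nonzero
# perturbation strictly increases the objective, certifying that the origin is
# the unique minimizer even though f is not strictly convex.
rng = np.random.default_rng(0)
x_star = np.zeros(5)
increases = [np.abs(x_star + dx).sum() > np.abs(x_star).sum()
             for dx in rng.standard_normal((1000, 5))]
print(all(increases))   # True: f(x* + dx) > f(x*) for all sampled dx != 0
```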

Now I'll close the loop by presenting UIP-based sketch proofs of several known results on the nuclear norm.

Theorem 4 (Singular Value Thresholding and Eigenvalue Thresholding) We have the following analytic results:

  • 1) The minimizer to

    \displaystyle \min_{\mathbf{X}} \quad \mu \|\mathbf{X}\|_* + \frac{1}{2}\|\mathbf{ Y - X}\|_F^2 \ \ \ \ \ (7)

    obeys {\mathbf{X} = \mathbf{U}\max\left(\mathbf{\Lambda} - \mu \mathbf{I}, \mathbf{0}\right)\mathbf{V}^\top}, for the SVD of {\mathbf{Y}} as {\mathbf{Y} = \mathbf{U\Lambda V}^\top}, where the maximum is taken component-wise (this is soft-thresholding of the singular values, which are nonnegative to begin with).

  • 2) For a symmetric matrix {\mathbf{Y}}, the minimizer to

    \displaystyle \min_{\mathbf{X}\succeq \mathbf{0}} \quad \mu \|\mathbf{X}\|_* + \frac{1}{2}\|\mathbf{ Y - X}\|_F^2 \ \ \ \ \ (8)

    obeys {\mathbf{X} = \mathbf{U} \max\left(\mathbf{\Lambda} - \mu \mathbf{I}, \mathbf{0}\right)\mathbf{U}^\top}, for eigenvalue decomposition of {\mathbf{Y}} as {\mathbf{Y} = \mathbf{U\Lambda U}^\top}.

  • 3) For a general square matrix {\mathbf{Y}}, the minimizer to

    \displaystyle \min_{\mathbf{X}\succeq \mathbf{0}} \quad \mu \|\mathbf{X}\|_* + \frac{1}{2}\|\mathbf{Y - X}\|_F^2 \ \ \ \ \ (9)

    obeys {\mathbf{X} = \mathbf{U} \max\left(\mathbf{\Lambda} - \mu \mathbf{I}, \mathbf{0}\right)\mathbf{U}^\top}, for eigenvalue decomposition of {\widetilde{\mathbf{Y}} = \frac{1}{2}\left(\mathbf{Y} +\mathbf{Y}^\top\right)} as {\widetilde{\mathbf{Y}} = \mathbf{U\Lambda U}^\top}.

Proof: 1) (7) is strongly convex and hence has a unique minimizer. It is not hard to see by UIP of Schatten norms that

\displaystyle \min_{\mathbf{X}} \quad \mu \|\mathbf{X}\|_* + \frac{1}{2}\|\mathbf{ Y - X}\|_F^2 \Longleftrightarrow \min_{\mathbf{X}} \quad \mu \|\mathbf{U}^\top\mathbf{XV}\|_* + \frac{1}{2}\|\mathbf{ \Lambda} - \mathbf{U}^\top \mathbf{XV}\|_F^2, \ \ \ \ \ (10)

where the right-hand side is minimized only when {\mathbf{\Sigma} = \mathbf{U}^\top \mathbf{XV}} is diagonal. The last assertion comes from two facts about any matrix {\mathbf{A}} decomposed into its diagonal and off-diagonal parts, {\mathbf{A = \Sigma + B}}: first, {\| \mathbf{\Sigma + B}\|_* \geq \|\mathbf{\Sigma}\|_*} (generalized from Lemma 13 in this paper; also known in quantum information theory as a particular case of the pinching inequality, see these slides), where equality does not force {\mathbf{B = 0}} (consider {\begin{bmatrix} 2 & 2 \\ 2& 2\end{bmatrix}}, whose nuclear norm equals that of its diagonal part); second, {\| \mathbf{\Sigma + B}\|_F^2 = \|\mathbf{\Sigma}\|_F^2 + \|\mathbf{B}\|_F^2 \geq \|\mathbf{\Sigma}\|_F^2}, with equality if and only if {\mathbf{B = 0}}. The second fact is what forces any minimizer to have {\mathbf{U}^\top \mathbf{XV}} diagonal. So the problem reduces to seeking {\mathbf{\Sigma}_\diamond = \mathop{\arg\min}_{\mathbf{\Sigma}} \, \mu \|\mathbf{\Sigma}\|_* + \frac{1}{2}\|\mathbf{ \Lambda} - \mathbf{\Sigma}\|_F^2} (solved by simple component-wise arguments similar to that of soft-thresholding for the Lasso problem), upon which the optimal variable is {\mathbf{X}_\diamond = \mathbf{U\Sigma}_\diamond \mathbf{V}^\top}.
2) On top of the above, the key is to verify {\mathbf{X}\succeq \mathbf{0} \Longleftrightarrow \mathbf{U}^\top \mathbf{X}\mathbf{U} \succeq \mathbf{0}} for orthogonal {\mathbf{U}} (convention here: “orthogonal” means {\mathbf{U}^\top \mathbf{U} = \mathbf{U}\mathbf{U}^\top =\mathbf{I}}), for which both directions are easy by the definition of positive semi-definiteness.
3) Observing that {\| \mathbf{Y -X} \|_F^2 = \|\mathbf{Y}^\top - \mathbf{X}\|_F^2} for symmetric {\mathbf{X}} (implied by {\mathbf{X}\succeq \mathbf{0}}), one can verify that {\|\mathbf{Y-X}\|_F^2 + \|\mathbf{Y}^\top - \mathbf{X} \|_F^2 = 2 \|\frac{\mathbf{Y} +\mathbf{Y}^\top}{2} - \mathbf{X}\|_F^2 + \frac{1}{2}\|\mathbf{Y} - \mathbf{Y}^\top\|_F^2}, where the last term is a constant independent of {\mathbf{X}}. So the objective is equivalent to {\min_{\mathbf{X}\succeq \mathbf{0}} \; \mu \|\mathbf{X}\|_* + \frac{1}{2}\|\frac{\mathbf{Y} +\mathbf{Y}^\top}{2} - \mathbf{ X}\|_F^2}. \Box
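Here is a small numpy sketch of the resulting thresholding operators (the helper names svt and evt_psd are my own, not from any cited code):

```python
import numpy as np

def svt(Y, mu):
    """Singular value thresholding: minimizer of mu*||X||_* + 0.5*||Y - X||_F^2."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt

def evt_psd(Y, mu):
    """Eigenvalue thresholding over X >= 0 for a general square Y (part 3):
    symmetrize first, then threshold the eigenvalues."""
    S = 0.5 * (Y + Y.T)
    lam, U = np.linalg.eigh(S)
    return U @ np.diag(np.maximum(lam - mu, 0.0)) @ U.T

rng = np.random.default_rng(0)
Y = rng.standard_normal((6, 6))
X1, X2 = svt(Y, 0.5), evt_psd(Y, 0.5)
print(np.linalg.matrix_rank(X1), np.linalg.matrix_rank(X2))  # ranks of the thresholded outputs
print(np.linalg.eigvalsh(X2).min() >= -1e-10)                # the EVT output is PSD
```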

As mentioned, the above results are critical to devising proximal-point-type numerical algorithms (and also alternating direction methods) for a large family of optimization problems involving the nuclear norm. Recently, this paper working on subspace segmentation has a nice take on the following slightly more "interesting" closed-form solution (though the authors failed to verify the uniqueness of the solution before using the term "the optimal solution"; also, their proof is by construction, which is less favorable for our purposes as discussed).

Theorem 5 The optimal solution to the unconstrained optimization

\displaystyle \min_{\mathbf{Z}} \quad \|\mathbf{Z}\|_* + \frac{\tau}{2}\|\mathbf{ X- XZ}\|_F^2 \ \ \ \ \ (11)

is

\displaystyle \mathbf{Z}_\diamond = \mathbf{V}\max\left(\mathbf{0}, \mathbf{I} - \frac{1}{\tau} \mathbf{\Lambda}^{-2}\right)\mathbf{V}^\top, \ \ \ \ \ (12)

where {\mathbf{X} = \mathbf{U\Lambda V}^\top} is the SVD of {\mathbf{X}} and we have adopted the conventions {\frac{1}{0} = \infty} and {\infty^2 = \infty} for convenience. Furthermore, {\mathbf{Z}_\diamond \succeq \mathbf{0}}.
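A small numerical sketch of this closed form (the helper names and the crude perturbation sanity check are mine): construct {\mathbf{Z}_\diamond} from the SVD of {\mathbf{X}}, confirm it is PSD, and check that small random perturbations never decrease the objective in (11).

```python
import numpy as np

def nuc(A):
    return np.linalg.svd(A, compute_uv=False).sum()

def lrr_closed_form(X, tau):
    """Closed form (12): Z = V max(0, I - (1/tau) Lambda^{-2}) V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = np.maximum(0.0, 1.0 - 1.0 / (tau * s**2))
    return Vt.T @ np.diag(d) @ Vt

rng = np.random.default_rng(0)
X, tau = rng.standard_normal((8, 12)), 5.0
Z = lrr_closed_form(X, tau)
obj = lambda W: nuc(W) + 0.5 * tau * np.linalg.norm(X - X @ W)**2

print(np.linalg.eigvalsh(Z).min() >= -1e-10)        # Z is PSD, as claimed
print(all(obj(Z) <= obj(Z + 1e-3 * rng.standard_normal(Z.shape))
          for _ in range(100)))                      # no small perturbation does better
```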

Before moving to the actual proof, we note that the form of {\mathbf{Z}_\diamond} is reminiscent of the solution to (2) and (3). Indeed the optimization problems are closely related; the difference lies in whether the equality {\mathbf{X = XZ}} is enforced through a hard constraint or a soft loss term. Besides, though {f\left(\mathbf{Z}\right) = \| \mathbf{Z}\|_F^2} is strictly convex w.r.t. {\mathbf{Z}}, we can only show that {g\left(\mathbf{Z}\right) = \| \mathbf{X-XZ} \|_F^2} is convex w.r.t. {\mathbf{Z}}, as follows. For {0\leq \alpha \leq 1}, we have

\displaystyle \begin{array}{rcl} & & \| \mathbf{X}-\alpha \mathbf{X} \mathbf{Z}_1 - \left(1-\alpha\right)\mathbf{X} \mathbf{Z}_2 \|_F^2 \\ & = &\| \alpha \left( \mathbf{X}- \mathbf{X} \mathbf{Z}_1\right) + \left(1-\alpha\right) \left( \mathbf{X}- \mathbf{X} \mathbf{Z}_2\right) \|_F^2 \\ & = & \alpha^2 \| \mathbf{X}- \mathbf{X} \mathbf{Z}_1 \|_F^2 + \left(1-\alpha\right)^2 \| \mathbf{X}- \mathbf{X} \mathbf{Z}_2 \|_F^2 + 2\alpha \left(1-\alpha\right) \langle \mathbf{X}- \mathbf{X} \mathbf{Z}_1, \mathbf{X}- \mathbf{X} \mathbf{Z}_2 \rangle \\ & \leq & \alpha^2 \| \mathbf{X}- \mathbf{X} \mathbf{Z}_1 \|_F^2 + \left(1-\alpha\right)^2 \| \mathbf{X}- \mathbf{X} \mathbf{Z}_2 \|_F^2 + 2\alpha \left(1-\alpha\right) \| \mathbf{X}- \mathbf{X} \mathbf{Z}_1\|_F \| \mathbf{X}- \mathbf{X} \mathbf{Z}_2\|_F \\ & \leq & \alpha \| \mathbf{X}- \mathbf{X} \mathbf{Z}_1 \|_F^2 + \left(1-\alpha\right) \| \mathbf{X}- \mathbf{X} \mathbf{Z}_2 \|_F^2, \end{array}

where the last inequality follows from the observation that {\forall m, n \geq 0},

\displaystyle \alpha^2 m^2 + \left(1-\alpha\right)^2 n^2 + 2\alpha\left(1-\alpha\right) mn -\alpha m^2 -\left(1-\alpha\right) n^2 = -\alpha(1-\alpha) \left(m-n\right)^2 \leq 0,

and {\langle \cdot, \cdot\rangle} denotes the matrix inner product. Notice that for the last inequality, equality holds only when {m = n}, or back in our matrix form, {\| \mathbf{X}- \mathbf{X} \mathbf{Z}_1 \|_F = \| \mathbf{X}- \mathbf{X} \mathbf{Z}_2 \|_F}. On the other hand, for the inner-product-to-norm-product (Cauchy-Schwarz) inequality from line 3 to line 4, equality is attained only when {\mathbf{X-XZ}_1 = \lambda \left( \mathbf{X-XZ}_2\right)} for some nonnegative constant {\lambda}, or when one of the two is {\mathbf{0}}. Altogether, equality throughout is achieved only when {\mathbf{X-XZ}_1 = \mathbf{X-XZ}_2 \Longrightarrow \mathbf{XZ}_1 =\mathbf{XZ}_2}. This provides an important hint on when {g\left(\mathbf{Z}\right)} is strictly convex:

Proposition 6 For a given matrix {\mathbf{X}}, if {\mathbf{XZ}_1 \neq\mathbf{XZ}_2 ~\forall \mathbf{Z}_1 \neq \mathbf{Z}_2}, {g\left(\mathbf{Z}\right)} is strictly convex. In other words, {g\left(\mathbf{Z}\right)} is strictly convex when {\mathbf{X}} is injective, or equivalently {\mathbf{X}} has full column rank.

So far it is clear that the objective in (11) is in general not strictly convex, and hence the optimization need not have a unique minimizer. The deferred proof below shows that, nevertheless, the minimizer is unique.

Proof: (sketch) We assume the full SVD of {\mathbf{X}} as {\mathbf{X} =\mathbf{U\Lambda V}^\top}, implying {\mathbf{UU}^\top = \mathbf{U}^\top \mathbf{U} = \mathbf{I}} and {\mathbf{VV}^\top = \mathbf{V}^\top \mathbf{V} = \mathbf{I}}. By the UIP of Schatten norms, it is easy to see {\|\mathbf{X-XZ}\|_F^2 =\|\mathbf{U}^\top \left(\mathbf{X-XZ}\right)\mathbf{V}\|_F^2 = \|\mathbf{\Lambda} - \mathbf{\Lambda V}^\top \mathbf{ZV}\|_F^2}, and hence the objective can be written as { \|\mathbf{V}^\top \mathbf{Z} \mathbf{V}\|_*+ \frac{\tau}{2}\|\mathbf{\Lambda} - \mathbf{\Lambda V}^\top \mathbf{ZV}\|_F^2 = \|\mathbf{D}\|_*+ \frac{\tau}{2}\|\mathbf{\Lambda} - \mathbf{\Lambda D}\|_F^2} with the substitution {\mathbf{D} = \mathbf{V}^\top \mathbf{ZV}}. For the diagonal (but non-vanishing) {\mathbf{\Lambda}} and a positive {\tau}, it is not hard to see that { \|\mathbf{D}\|_*+ \frac{\tau}{2}\|\mathbf{\Lambda} - \mathbf{\Lambda D}\|_F^2} is minimized only when {\mathbf{D}} is also diagonal (by the pinching inequality for the nuclear norm as mentioned above, together with the corresponding property of the squared Frobenius norm). The diagonal elements can then be uniquely determined from the corresponding scalar optimization problems. For invertible {\mathbf{V}}, we must have {\mathbf{Z} =\mathbf{VDV}^\top}. The positive semi-definiteness follows since the diagonal matrix {\mathbf{D}_\diamond = \max \left(\mathbf{0}, \mathbf{I} - \frac{1}{\tau}\mathbf{\Lambda}^{-2}\right) \geq \mathbf{0}}, i.e., is component-wise nonnegative.

Uniqueness. The uniqueness can be seen from two different aspects. First, perhaps the simplest, we notice that the only source of ambiguity for {\mathbf{Z}_\diamond} comes from the non-uniqueness of {\mathbf{V}} in the SVD. But we know from the above argument that the minimizer(s) has the form {\mathbf{Z}_\diamond = \mathbf{MDM}^\top} (for clarity, we switch to {\mathbf{M}} for a general orthogonal matrix). The form of {\mathbf{D}_\diamond} indicates that only those singular vectors corresponding to singular values greater than {1/\sqrt{\tau}} matter, i.e., {\mathbf{Z}_\diamond = \sum_{\sigma_i > 1/\sqrt{\tau}} d_{\sigma_i} \mathbf{m}_i\mathbf{m}_i^\top = \mathbf{M}_\tau \mathbf{D}_\tau \mathbf{M}_\tau^{\top}}, where we have partitioned the matrices according to the threshold. However, we must have {\mathbf{M}_\tau = \mathbf{V}_\tau \mathbf{R}_\mathbf{M}} for a certain rotation matrix {\mathbf{R}_\mathbf{M}} that represents an in-plane rotation of the subspace basis {\mathbf{V}_\tau}; since such a rotation can only mix singular vectors sharing the same singular value, over which blocks {\mathbf{D}_\tau} is a scaled identity, we get {\mathbf{Z}_\diamond = \mathbf{M}_\tau \mathbf{D}_\tau \mathbf{M}_\tau^{\top} = \mathbf{V}_\tau\left( \mathbf{R}_\mathbf{M} \mathbf{D}_\tau \mathbf{R}_\mathbf{M}^\top\right) \mathbf{V}_\tau^\top = \mathbf{V}_\tau\mathbf{D}_\tau \mathbf{V}_\tau^\top}. The second way goes as follows. Suppose there is another minimizer of the form {\mathbf{Z}_* = \widetilde{\mathbf{V}} \mathbf{D}_\diamond \widetilde{\mathbf{V}}^\top}; then we must have the perturbation relationship {\widetilde{\mathbf{V}} \mathbf{D}_\diamond \widetilde{\mathbf{V}}^\top = \mathbf{V}\mathbf{D}_\diamond \mathbf{V}^\top + \mathbf{H} }. It follows that

\displaystyle \begin{array}{rcl} \|\mathbf{H}\|_F^2 & =& \| \widetilde{\mathbf{V}} \mathbf{D}_\diamond \widetilde{\mathbf{V}}^\top - \mathbf{V}\mathbf{D}_\diamond \mathbf{V}^\top\|_F^2 = \|\widetilde{\mathbf{V}} \mathbf{D}_\diamond \widetilde{\mathbf{V}}^\top\|_F^2 + \| \mathbf{V}\mathbf{D}_\diamond \mathbf{V}^\top\|_F^2 - 2\langle \widetilde{\mathbf{V}} \mathbf{D}_\diamond \widetilde{\mathbf{V}}^\top, \mathbf{V}\mathbf{D}_\diamond \mathbf{V}^\top\rangle \\ & = & 2\|\mathbf{D}_\diamond\|_F^2 - 2 \langle \mathbf{D}_\diamond, \mathbf{D}_\diamond \rangle = 0, \end{array}

by which we conclude {\mathbf{H} = \mathbf{0}}. \Box

The above proof has verified Lemma 1 in this paper. Lemmas 2 and 3 therein are built on the results of Lemma 1, and the uniqueness of the respective solutions can also be established by a similar perturbation argument as shown above.

— 3. Connection to other work and generalization —

Recently, unitarily invariant norms used as regularizers in machine learning have been touched on in Ref [2]. That paper focuses in particular on characterizing optimal solutions to such spectral regularization problems. In addition, Ref [1] shares a similar flavor and discusses unitarily invariant functions as well. These techniques draw heavily from matrix analysis, and hence the seminal paper Ref [3] proves to be extremely useful here. Besides, the recent monograph Ref [4] on matrix functions is also pertinent in this line (e.g., refer to this paper to see how some recent matrix cone programming problems are tackled).

\displaystyle \spadesuit \quad \spadesuit \quad \spadesuit \quad \spadesuit \quad \spadesuit

— Further readings (References) —

  1. Lu, Z. S. and Zhang, Y. Penalty Decomposition Methods for Rank Minimization. Submitted to arXiv.org.
  2. Argyriou, A., Micchelli, C. A., and Pontil, M. On Spectral Learning. Journal of Machine Learning Research, 11(Feb):935-953, 2010.
  3. Lewis, A. S. The Convex Analysis of Unitarily Invariant Matrix Functions. Journal of Convex Analysis, 1995.
  4. Higham, N. J. Functions of Matrices: Theory and Computation. SIAM, 2008.

[Update Aug 12 2011] Refer also to this paper for similar discussions. (Citation: Yu, Y.-L. and Schuurmans, D. Rank/Norm Regularization with Closed-Form Solutions: Application to Subspace Clustering. In UAI 2011.)

[Update Jan 25 2012] I repeatedly encountered situations where variational characterizations of the nuclear norm could be useful. One such result, for example, is {\|\mathbf{X}\|_* = \min_{\mathbf{X=UV}^\top} \|\mathbf{U}\|_F \|\mathbf{V}\|_F = \min_{\mathbf{X=UV}^\top} \frac{1}{2} \left(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2\right)} (this can be found in this paper without proof). To see this is true, one can set {\mathbf{U} = \mathbf{L\Sigma}^{1/2}} and {\mathbf{V} = \mathbf{R}\mathbf{\Sigma}^{1/2}}, where {\mathbf{X} = \mathbf{L\Sigma R}^\top} is the SVD of {\mathbf{X}}; this feasible factorization shows {\|\mathbf{X}\|_* \geq \min_{\mathbf{X=UV}^\top} \frac{1}{2} \left(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2\right) \geq \min_{\mathbf{X=UV}^\top} \|\mathbf{U}\|_F \|\mathbf{V}\|_F} (the second inequality is AM-GM). On the other hand, the Cauchy-Schwarz inequality for unitarily invariant norms tells us { \|\mathbf{X}\|_* = \|\mathbf{UV}^\top\|_* \leq \|\mathbf{U}\|_F \|\mathbf{V}\|_F} for any such factorization (see, e.g., Ex IV 2.7 in Matrix Analysis by Bhatia), so all three quantities coincide.
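A quick numpy check of this identity (the construction below just instantiates the factors from the SVD; variable names are mine):

```python
import numpy as np

# With X = L diag(s) R^T, the factors U = L diag(sqrt(s)) and V = R diag(sqrt(s))
# satisfy X = U V^T and attain ||X||_* in both variational expressions.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 7))
L, s, Rt = np.linalg.svd(X, full_matrices=False)

U = L @ np.diag(np.sqrt(s))
V = Rt.T @ np.diag(np.sqrt(s))
print(np.allclose(X, U @ V.T))                               # X = U V^T
print(s.sum(),                                               # ||X||_*
      np.linalg.norm(U) * np.linalg.norm(V),                 # ||U||_F ||V||_F
      0.5 * (np.linalg.norm(U)**2 + np.linalg.norm(V)**2))   # (||U||_F^2 + ||V||_F^2)/2
```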


