1. Please prove that the following sets are convex:
1) S = {x ∈ R^m | |p(t)| ≤ 1 for |t| ≤ π/3}, where p(t) = x1 cos t + x2 cos 2t + · · · + xm cos mt. (5 pts)
2) (Ellipsoids) {x | √((x − xc)^T P (x − xc)) ≤ r} (xc ∈ R^n, r ∈ R, P ⪰ 0). (5 pts) (A numerical spot-check of this case appears after the problem.)
3) (Symmetric positive semidefinite matrices) S^{n×n}_+ = {P ∈ S^{n×n} | P ⪰ 0}. (5 pts)
4) The set of points closer to a given point than to a given set, i.e., {x | ∥x − x0∥2 ≤ ∥x − y∥2 for all y ∈ S}, where S ⊆ R^n. (5 pts)
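A quick numerical spot-check (not a proof, and not part of the graded assignment) of the ellipsoid case 2): sample two points of the set and verify that the segment between them stays inside. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
P = A @ A.T                          # P ⪰ 0 by construction
xc = rng.standard_normal(n)
r = 1.0

def in_ellipsoid(x):
    return np.sqrt((x - xc) @ P @ (x - xc)) <= r + 1e-12

pts = []                             # rejection-sample two member points
while len(pts) < 2:
    x = xc + 0.1 * rng.standard_normal(n)
    if in_ellipsoid(x):
        pts.append(x)
x1, x2 = pts
for lam in np.linspace(0.0, 1.0, 11):
    assert in_ellipsoid(lam * x1 + (1 - lam) * x2)
print("segment between two member points stays in the set")
```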
2. (15 pts) For a given norm ∥ · ∥ on R^n, the dual norm, denoted ∥ · ∥∗, is defined as
∥y∥∗ = sup_{x ∈ R^n} {y^T x | ∥x∥ ≤ 1}.
Show that the dual of the Euclidean norm is the Euclidean norm, i.e., sup_{x ∈ R^n} {z^T x | ∥x∥2 ≤ 1} = ∥z∥2.
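As a numerical illustration of the claim (not a proof): the supremum is attained at x = z/∥z∥2, and random points of the unit ball never do better. Data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(5)
# candidate maximizer x = z / ∥z∥2 achieves zᵀx = ∥z∥2
assert np.isclose(z @ (z / np.linalg.norm(z)), np.linalg.norm(z))
for _ in range(1000):
    x = rng.standard_normal(5)
    x /= max(1.0, np.linalg.norm(x))     # force ∥x∥2 ≤ 1
    assert z @ x <= np.linalg.norm(z) + 1e-9
print("sup of zᵀx over the unit ball matches ∥z∥2")
```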
3. (15 pts) Define a norm cone as
C ≡ {(x, t) : x ∈ R^d, t ≥ 0, ∥x∥ ≤ t} ⊆ R^{d+1}.
Show that the norm cone is convex by using the definition of convex sets.
4. (18 pts) Let C ⊂ R^n be convex and f : C → R⋆. Show that the following statements are equivalent:
(a) epi(f) is convex.
(b) For all points xi ∈ C and {λi | λi ≥ 0, Σ_{i=1}^n λi = 1, i = 1, 2, · · · , n}, we have
f(Σ_{i=1}^n λi xi) ≤ Σ_{i=1}^n λi f(xi).
(c) For all x, y ∈ C and λ ∈ [0, 1],
f((1 − λ)x + λy) ≤ (1 − λ)f(x) + λf(y).
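An illustrative check of statement (b) (Jensen's inequality) for the convex function f(x) = ∥x∥2², with synthetic points and weights; this is not part of the assignment and proves nothing by itself.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sum(x ** 2)                 # a convex f for illustration
xs = rng.standard_normal((6, 3))             # six points xi in R^3
lam = rng.random(6)
lam /= lam.sum()                             # λi ≥ 0 and Σ λi = 1
lhs = f(lam @ xs)                            # f(Σ λi xi)
rhs = lam @ np.array([f(x) for x in xs])     # Σ λi f(xi)
assert lhs <= rhs + 1e-12
print(f"f(Σ λi xi) = {lhs:.4f} ≤ Σ λi f(xi) = {rhs:.4f}")
```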
5. (14 pts) Monotone Mappings. A function ψ : R^n → R^n is called monotone if for all x, y ∈ dom ψ,
(ψ(x) − ψ(y))^T (x − y) ≥ 0. (1)
Suppose f : R^n → R is a differentiable convex function. Show that its gradient ∇f is monotone. Is the converse true, i.e., is every monotone mapping the gradient of a convex function?
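For intuition (not part of the assignment): for the convex quadratic f(x) = (1/2)x^T Qx with Q ⪰ 0, the gradient is ∇f(x) = Qx, and (∇f(x) − ∇f(y))^T (x − y) = (x − y)^T Q(x − y) ≥ 0. A synthetic numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
Q = B @ B.T                                  # Q ⪰ 0
grad = lambda x: Q @ x                       # gradient of (1/2) xᵀQx
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert (grad(x) - grad(y)) @ (x - y) >= -1e-12
print("gradient of the convex quadratic is monotone on random pairs")
```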
6. (18 pts) Please determine whether the following functions are convex, concave, or neither, and give a detailed explanation for your choice.
1) f1(x1, x2, · · · , xn) = −(x1 x2 · · · xn)^{1/n} if x1, · · · , xn > 0, and ∞ otherwise (a numerical spot-check follows this problem);
2) f2(x1, x2) = x1^α x2^{1−α}, where 0 ≤ α ≤ 1, on R^2_{++};
3) f3(x, u, v) = − log(uv − x^T x) on dom f = {(x, u, v) | uv > x^T x, u, v > 0}.
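A midpoint-convexity spot-check (not a proof, and not part of the assignment) for f1 on random positive points:

```python
import numpy as np

rng = np.random.default_rng(4)
f1 = lambda x: -np.prod(x) ** (1.0 / x.size)       # −(x1⋯xn)^{1/n}, x > 0
for _ in range(1000):
    x, y = rng.random(5) + 0.01, rng.random(5) + 0.01
    assert f1(0.5 * (x + y)) <= 0.5 * (f1(x) + f1(y)) + 1e-12
print("f1 passes the midpoint-convexity check on random positive points")
```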
In the lecture, we have learned about robust linear programming as an application of second-order cone programming. Now we will consider a similar robust variation of the convex quadratic program
minimize (1/2)x^T P x + q^T x + r
subject to Ax ⪯ b.
For simplicity, we assume that only the matrix P is subject to errors, and the other parameters (q, r, A, b) are exactly known. The robust quadratic program is defined as
minimize sup_{P ∈ E} (1/2)x^T P x + q^T x + r
subject to Ax ⪯ b
where E is the set of possible matrices P. For each of the following sets E, express the robust QP as a convex problem in a standard form (e.g., QP, QCQP, SOCP, SDP).
(a) A finite set of matrices: E = {P1, …, PK}, where Pi ∈ S^n_+, i = 1, …, K. (A solver sketch of this case follows part (c).)
(b) A set specified by a nominal value P0 ∈ S^n_+ plus a bound on the eigenvalues of the deviation P − P0:
E = {P ∈ S^n | −γI ⪯ P − P0 ⪯ γI}
where γ ∈ R and P0 ∈ S^n_+.
(c) An ellipsoid of matrices:
E = {P0 + Σ_{i=1}^K Pi ui | ∥u∥2 ≤ 1}.
You can assume Pi ∈ S^n_+, i = 0, …, K.
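For case (a), the robust QP is equivalent to the QCQP: minimize t subject to (1/2)x^T Pi x + q^T x + r ≤ t for i = 1, …, K and Ax ⪯ b. A minimal sketch with synthetic data, assuming CVXPY is available:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
n, K = 4, 3
Ps = []
for _ in range(K):
    B = rng.standard_normal((n, n))
    Ps.append(B @ B.T)                        # each Pi ⪰ 0
q, r = rng.standard_normal(n), 0.5
A, b = rng.standard_normal((6, n)), rng.random(6) + 1.0

x, t = cp.Variable(n), cp.Variable()
cons = [0.5 * cp.quad_form(x, P) + q @ x + r <= t for P in Ps]
cons.append(A @ x <= b)
prob = cp.Problem(cp.Minimize(t), cons)
prob.solve()
print("robust optimal value:", prob.value)
```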
Please consider the following convex optimization problem and calculate its solution:
minimize −Σ_{i=1}^n log(αi + xi)
subject to x ⪰ 0, 1^T x = 1.
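This is the classic water-filling problem: the KKT conditions give xi = max{0, 1/ν − αi}, with ν > 0 chosen so that Σ xi = 1. A numerical sketch with synthetic αi, finding ν by bisection:

```python
import numpy as np

alpha = np.array([0.3, 0.8, 1.2, 2.0])       # synthetic αi > 0

def total(nu):
    # Σ max{0, 1/ν − αi}, strictly decreasing in ν
    return np.maximum(0.0, 1.0 / nu - alpha).sum()

lo, hi = 1e-6, 1e6                           # total(lo) ≫ 1 > total(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if total(mid) > 1.0 else (lo, mid)
x = np.maximum(0.0, 1.0 / lo - alpha)
print("x =", x.round(4), " sum =", x.sum().round(6))
```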
1. (50 pts) L-smooth functions. Suppose the function f : R^n → R is convex and differentiable, and that f has an L-Lipschitz continuous gradient. Please prove that the following relations hold for all x, y ∈ R^n:
[1] ⇒ [2] ⇒ [3]
[1] ⟨∇f(x) − ∇f(y), x − y⟩ ≤ L∥x − y∥^2,
[2] f(y) ≤ f(x) + ∇f(x)^T (y − x) + (L/2)∥y − x∥^2,
[3] f(y) ≥ f(x) + ∇f(x)^T (y − x) + (1/(2L))∥∇f(y) − ∇f(x)∥^2.
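A sanity check (not a proof) of [2] and [3] for the quadratic f(x) = (1/2)x^T Qx with Q ⪰ 0, for which ∇f(x) = Qx and L = λmax(Q); data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
Q = B @ B.T
L = np.linalg.eigvalsh(Q).max()              # Lipschitz constant of ∇f
f = lambda x: 0.5 * x @ Q @ x
g = lambda x: Q @ x
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lin = f(x) + g(x) @ (y - x)
    assert f(y) <= lin + 0.5 * L * np.sum((y - x) ** 2) + 1e-9        # [2]
    assert f(y) >= lin + np.sum((g(y) - g(x)) ** 2) / (2 * L) - 1e-9  # [3]
print("[2] and [3] hold on random points for the quadratic example")
```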
2. (50 pts) Backtracking line search. Please show the convergence of backtracking line search on an m-strongly convex and M-smooth objective function f as
f(x^(k)) − p⋆ ≤ c^k (f(x^(0)) − p⋆)
where c = 1 − min{2mα, 2βαm/M} < 1.
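For reference, a minimal sketch of gradient descent with backtracking line search on a strongly convex quadratic; α ∈ (0, 1/2) and β ∈ (0, 1) are the usual backtracking parameters, and all data are synthetic:

```python
import numpy as np

def backtracking_gd(f, grad, x0, alpha=0.25, beta=0.5, iters=50):
    x = x0
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        # shrink t until the sufficient-decrease condition holds
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

rng = np.random.default_rng(7)
B = rng.standard_normal((5, 5))
Q = B @ B.T + np.eye(5)                  # m-strongly convex, M-smooth
f = lambda x: 0.5 * x @ Q @ x            # p⋆ = 0, attained at x = 0
grad = lambda x: Q @ x
x = backtracking_gd(f, grad, rng.standard_normal(5))
print("final objective:", f(x))
```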
For each of the following convex functions, compute the proximal operator prox_f. (Closed-form sketches follow the two parts below.)
(1) (10 pts) f(x) = λ∥x∥1, where x ∈ R^d and λ ∈ R+ is the regularization parameter.
(2) (20 pts) f(X) = λ∥X∥∗, where X ∈ R^{d×m} is a matrix, ∥X∥∗ denotes the nuclear norm, and λ ∈ R+ is the regularization parameter.
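For reference, the well-known closed forms, with prox_f(v) = argmin_x f(x) + (1/2)∥x − v∥²: soft-thresholding for (1) and singular value thresholding for (2). A minimal sketch:

```python
import numpy as np

def prox_l1(v, lam):
    # soft-thresholding: the prox of λ∥·∥1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_nuclear(V, lam):
    # singular value thresholding: the prox of λ∥·∥∗
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

print(prox_l1(np.array([1.5, -0.2, 0.7]), 0.5))   # entries shrink toward 0
```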
2 Alternating Direction Method of Multipliers
(35 pts) Consider the following problem:
minimize − log det X + Tr(XC) + ρ∥X∥1
subject to X ⪰ 0 (1)
In (1), ∥ · ∥1 is the entrywise ℓ1-norm. This problem arises in the estimation of sparse undirected graphical models. C is the empirical covariance matrix of the observed data. The goal is to estimate a covariance matrix with a sparse inverse for the observed data. In order to apply ADMM, we rewrite (1) as
minimize − log det X + Tr(XC) + I_{X⪰0}(X) + ρ∥Y∥1
subject to X = Y (2)
where I_{X⪰0}(·) is the indicator function associated with the set X ⪰ 0. Please provide the ADMM update (the derivation process is required) for each variable at the t-th iteration.
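A sketch of what the resulting iteration looks like (the graded derivation is still required): with ADMM penalty ρ_admm and scaled dual variable U, the X-update is solved by eigendecomposition and the Y-update is entrywise soft-thresholding. Names and data below are illustrative.

```python
import numpy as np

def soft(V, k):
    return np.sign(V) * np.maximum(np.abs(V) - k, 0.0)

def admm_sparse_inv_cov(C, rho_l1, rho_admm=1.0, iters=200):
    n = C.shape[0]
    Y, U = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(iters):
        # X-update: zero the gradient of −log det X + Tr(XC)
        # + (ρ_admm/2)∥X − Y + U∥_F² and diagonalize
        lams, Q = np.linalg.eigh(rho_admm * (Y - U) - C)
        d = (lams + np.sqrt(lams ** 2 + 4 * rho_admm)) / (2 * rho_admm)
        X = Q @ np.diag(d) @ Q.T              # d > 0, so X ≻ 0
        Y = soft(X + U, rho_l1 / rho_admm)    # Y-update: prox of the ℓ1 term
        U = U + X - Y                         # scaled dual update
    return X, Y

rng = np.random.default_rng(8)
C = np.cov(rng.standard_normal((5, 40)))      # synthetic empirical covariance
X, Y = admm_sparse_inv_cov(C, rho_l1=0.1)
```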
3 Monotone Operators and Base Splitting Schemes
(35 pts) Prove the theorem below:
Theorem 1. For v ∈ R^n, the solution of the equation
u∗ = (I − JW)^{−T} v (3)
is given by
u∗ = v + W^T ũ∗ (4)
where I is the identity matrix and ũ∗ is a zero of the operator splitting problem 0 ∈ (F + G)(ũ∗), with operators defined as
F(ũ) = (I − W^T)(ũ), G(ũ) = Dũ − v (5)
where D is a diagonal matrix defined by J = (I + D)^{−1} (where Jii > 0).
(Hint-1: please refer to Monotone Operators-note.pdf.)
(Hint-2: I = (I − JW)^{−T} (I − JW)^T.)
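A numerical check of the theorem's conclusion on synthetic data (not a substitute for the proof): since F and G here are single-valued affine maps, the zero condition 0 ∈ (F + G)(ũ) reduces to the linear system (I − W^T + D)ũ = v.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6
W = 0.3 * rng.standard_normal((n, n))
D = np.diag(rng.random(n) + 0.1)
J = np.linalg.inv(np.eye(n) + D)                      # J = (I + D)^{-1}, Jii > 0
v = rng.standard_normal(n)

u_tilde = np.linalg.solve(np.eye(n) - W.T + D, v)     # 0 = (F + G)(ũ)
u_direct = np.linalg.solve((np.eye(n) - J @ W).T, v)  # (I − JW)^{-T} v
assert np.allclose(v + W.T @ u_tilde, u_direct)
print("u∗ = v + Wᵀũ∗ matches (I − JW)^{-T} v")
```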