AN INTRODUCTION TO OPTIMIZATION
SOLUTIONS MANUAL
Fourth Edition
EdwinK.P.ChongandStanislawH.Z ak
A JOHN WILEYSONS, INC., PUBLICATION
1. Methods of Proof and Some Notation
1.1
1.2
1.3
1.4
1.5
A B AB not Bnot A FFTT FTTT TFFF TTTT
A B AB not A and not B FFTT FTTT TFFF TTTT
not A not B
TT TF FT FF
not A not B
TT TF FT FF
not A and B
not A not B
T T T F
TT TF FT FF
A FFT FTT TFT TTF
A B AandBorAandnotB FFF FTF TFT TTT
B not A or not B
AandB AandnotB
FF FF FT TF
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is not relevant because the rule does not say that if a card has an even number on one side, then it has a vowel on the other side.
Turning over the A card directly verifies the rule, while turning over the 3 card verifies the contraposition. 2. Vector Spaces and Matrices
2.1
We show this by contradiction. Suppose nm. Then, the number of columns of A is n. Since rank A is the maximum number of linearly independent columns of A, then rank A cannot be greater than nm, which contradicts the assumption that rank Am.
2.2 .
: Since there exists a solution, then by Theorem 2.1, rank ArankA.b. So, it remains to prove that rankAn. For this, suppose that rankAn note that it is impossible for rankAn since A has only n columns. Hence, there exists y 2 Rn, y 6 0, such that Ay0 this is because the columns of
1
A are linearly dependent, and Ay is a linear combination of the columns of A. Let x be a solution to Axb. Then clearly xy 6 x is also a solution. This contradicts the uniqueness of the solution. Hence, rank An.
: By Theorem 2.1, a solution exists. It remains to prove that it is unique. For this, let x and y be solutions, i.e., Axb and Ayb. Subtracting, we get Axy0. Since rankAn and A has n columns, then xy0 and hence xy, which shows that the solution is unique.
2.3
Consider the vectors a i1,ai2 Rn1, i1,,k. Since kn2, then the vectors a 1,,a k must be linearly independent in Rn1. Hence, there exist 1, . . . k, not all zero, such that
Xk
iai0.
i1
PThe first component of the above vector equation is Pki1 i0, while the last n components have the form ki1 iai0, completing the proof.
2.4
a. We first postmultiply M by the matrix
Ik O
Mmk,k Imk
to obtain
Note that the determinant of the postmultiplying matrix is 1. Next we postmultiply the resulting product
Mmk,k Imk Ik Mk,k O Mmk,k
OO
Imk O Ik
Imk. Mk,k O
by
to obtain
Notice that where
Imk O
The above easily follows from the fact that the determinant changes its sign if we interchange columns, as
Imk O
O Imk O IkIk O .
Mk,k O Imk O O Mk,k detMdet Ik O !detO Ik!,
O Mk,k Imk O detO Ik!1.
discussed in Section 2.2. Moreover,
det Ik O !detIkdetMk,kdetMk,k.
O Mk,k
detM detMk,k.
Hence,
b. We can see this on the following examples. We assume, without loss of generality that Mmk,kO and
let Mk,k2. Thus k1. First consider the case when m2. Then we have M O Imk0 1.
Mk,k O 20 2
Thus,
Next consider the case when m3. Then
det
O Mk,k
detM2detMk,k.260 . 1 037
Imkdet6 0 . 0 1 72 6 detMk,k. O 64 75
2 . 0 0 detM 6 detMk,k
detMdetMk,k. MA B
CD
Therefore, in general,
However, when km2, that is, when all submatrices are square and of the same dimension, then it is
true that See 121.
2.5
Let
and suppose that each block is kk. John R. Silvester 121 showed that if at least one of the blocks is equal to O zero matrix, then the desired formula holds. Indeed, if a row or column block is zero, then the determinant is equal to zero as follows from the determinants properties discussed Section 2.2. That is, if ABO, or ACO, and so on, then obviously det M0. This includes the case when any three or all four block matrices are zero matrices.
If BO or CO then
detMdetA BdetAD. CD
The only case left to analyze is when AO or DO. We will show that in either case, detM detBC.
Without loss of generality suppose that DO. Following arguments of John R. Silvester 121, we premul tiply M by the product of three matrices whose determinants are unity:
Hence,
Thus we have
Ik IkIk OIk IkA BC O. O Ik Ik Ik O Ik C O A B
detA BC O CO AB
det C det B
det Ik det C det B.
detA BdetBCdetCB. CO
3
2.6
We represent the given system of equations in the form Axb, where
26137
A 1 1 2 1 , x6x27, and b 1 . 1 2 0 1 435 2
x4
A1 1 2 1!1 1 2 1,
Using elementary row operations yields
1 2 0 1 0 3 2 2
and
1 1,
A,b1 1 2 1 1!1 1 2
1 2 0 1 2 0 3 2 2 3
from which rank A2 and rankA, b2. Therefore, by Theorem 2.1, the system has a solution. We next represent the system of equations as
1 1x112x3x4 1 2 x2 24
Assigning arbitrary values to x3 and x4 x3d3, x4d4, we get x11 1112x3x4
x2 1 2 24
12 112x3x4
3 1 124 4d31d4 .
213 2 4d31d4 3 243 213 203
33
12 d32 d4 33
Therefore, a general solution is
6 x2 76 12 d32 d4 762 7 d362 7 d46 17 ,
3333 3333
4x354d3 5415405405 x4 d4 010
where d3 and d4 are arbitrary values. 2.7
1. Apply the definition of a: 8
8:a if a00 if a0 a if a0
:a if a00 if a0 a if a0
a.
2. Ifa0,thenaa. Ifa0,thenaa0a. Henceaa. Ontheotherhand,aa
by the above. Hence, aaa by property 1. 4
a
3. Wehavefourcasestoconsider. First,ifa,b0,thenab0. Hence,ababab. Second,ifa,b0,thenab0. Henceabababab.
Third, if a0 and b0, then we have two further subcases:
1. Ifab0,thenababab. 2. Ifab0,thenababab.
The fourth case, a0 and b0, is identical to the third case, with a and b interchanged. 4. Wefirstshowabab. Wehave
abab
ab by property 3
ab by property 1.
To show abab, we note that aabbabb, which implies abab. On the other hand, from the above we have babaab by property 1. Therefore, abab. 5. Wehavefourcases. First,ifa,b0,wehaveab0andhenceababab. Second,ifa,b0, wehaveab0andhenceabababab. Third,ifa0,b0,wehaveab0andhence abababab. The fourth case, a0 and b0, is identical to the third case, with a and b
interchanged. 6. We have
abab by property 3cd.
7. : Byproperty2,aaandaa. Therefore,abimpliesaabandaab.
: Ifa0,thenaab. Ifa0,thenaab.
For the case whenis replaced by , we simply repeat the above proof withreplaced by . 8. This is simply the negation of property 7 apply DeMorgans Law.
2.8
Observe that we can represent hx, yi2 as
hx, yi2x 2 3 yQxQyxQ2y,
where
Note that the matrix QQ is nonsingular.
1. Now, hx, xi2Qx QxkQxk20, and
hx,xi2 0 , kQxk2 0 , Qx0
, x0
since Q is nonsingular.
2. hx, yi2QxQyQyQxhy, xi2. 3. We have
hxy,zi2xyQ2z
xQ2zyQ2z
hx,zi2 hy,zi2. 5
35
Q1 1. 12
4. hrx, yi2rxQ2yrxQ2yrhx, yi2.
2.9
We have kxkkxyykkxykkyk by the Triangle Inequality. Hence, kxkkykkxyk. On the other hand, from the above we have kykkxkkyxkkxyk. Combining the two inequalities, we obtain kxkkykkxyk.
2.10
Let0begiven. Set. Hence,ifkxyk,thenbyExercise2.9,kxkkykkxyk. 3. Transformations
3.1
Let v be the vector such that x are the coordinates of v with respect to e1,e2,,en, and x0 are the coordinates of v with respect to e01, e02, . . . , e0n. Then,
and
Hence,
which implies
3.2
a. We have
Therefore,
b. We have
Therefore,
3.3
We have
vx1e1 xnen e1,,enx,
vx01e01 x0ne0n e01,,e0nx0. e1,,enxe01,,e0nx0
x0e01,,e0n1e1,,enxTx.
261 2 437 e01,e02,e03e1,e2,e34 3 1 55.
4 5 3
21 2 431 Te01,e02,e031e1,e2,e364 3 1 575
4 5 3
261 2 337 e1, e2, e3e01, e02, e03 41 1 05 .
228 14 143
1 64 29 19 42 11 13
7 75. 7
345
261 2 337 T41 1 05.
345
262 2 337 e1,e2,e3e01,e02,e034 1 1 05.
1 2 1 6
Therefore, the transformation matrix from e01, e02, e03 to e1, e2, e3 is
262 2 337 T41 1 05,
1 2 1
Now, consider a linear transformation L : R3 ! R3, and let A be its representation with respect to
e1, e2, e3, and B its representation with respect to e01, e02, e03. Let yAx and y0Bx0. Then, y0TyTAxTAT1x0TAT1x0.
Hence, the representation of the linear transformation with respect to e01,e02,e03 is
3.4
We have
1 2631083741 8 4 5.
2 13 7
261 1 1 137
e01, e02, e03, e04e1, e2, e3, e4 60 1 1 17 . 40 0 1 15
BTAT
0001 Therefore, the transformation matrix from e1, e2, e3, e4 to e01, e02, e03, e04 is
21 1 1 131 21 1 0 03 T60 1 1 17 60 1 1 07.
40 0 1 15 40 0 1 15 0001 0001
Now, consider a linear transformation L : R4 ! R4, and let A be its representation with respect to
e1, e2, e3, e4, and B its representation with respect to e01, e02, e03, e04. Let y
Ax and y0Bx0.
Let v1, v2, v3, v4 be a set of linearly independent eigenvectors of A corresponding to the eigenvalues 1, 2, 3, and 4. Let Tv1, v2, v3, v4. Then,
Then, Therefore,
3.5
y0TyTAxTAT1x0TAT1x0.
265 3 4 337
1 27. 41 0 1 25
BTAT163 2 1114
Hence,
261 0 037 ATT 4 0 2 0 5 ,
0 0 3 7
40 0 3 05 0 0 0 4
AT
1v1,2v2,3v3,4v4v1,v2,v3,v46 0 2 0 0 7.
Av1, v2, v3, v4Av1, Av2, Av3, Av4261 0 0 0 37
or
1 2610037 T AT4 0 2 0 5 .
0 0 3
Therefore, the linear transformation has a diagonal matrix form with respect to the basis formed by a linearly independent set of eigenvectors.
Because
theeigenvaluesare1 2,2 3,3 1,and4 1.
detA2311,
From Aviivi, where vi 6 0 i1, 2, 3, the corresponding eigenvectors are
26037 26037 26 0 37 26 24 37 v1607, v2607, v36 2 7,and v46127.
Therefore, the basis we are interested in is
415 415 495 415 0119
826 037 26 037 26 0 37 26 24 379
607, 607, 6 2 7, 6127 . :415 415 495 4 1 5;
v1,v2,v3
Suppose v1, . . . , vn are eigenvectors of A corresponding to 1, . . . , n, respectively. Then, for each i
3.6
1119
1,,n, we have
which shows that 11,,1n are the eigenvalues of In A.
In Avi vi Avi vi ivi 1ivi
Alternatively, we may write the characteristic polynomial of InA as
InA1det1InInAdetInA1nA,
which shows the desired result.
3.7
Let x,y 2 V?, and , 2 R. To show that V? is a subspace, we need to show that xy 2 V?. For this, let v be any vector in V. Then,
vxyvxvy0,
since vxvy0 by definition.
3.8
The null space of A is N Ax 2 R3 : Ax0 . Using elementary row operations and backsubstitution, we can solve the system of equations:
26 42 0 37 26 42 0 37 26 42 0 37 4 x 12 x 20 42 1 15!40 2 15!40 2 152x2x3 0
2 3 1 0 2 1 0 0 0
1 11 213213
67647
x223, x12243x4x25415x3.
8
2 x3 1
Therefore,
3.9
8 26 1 37 9 NA:4245c : c 2 R;.
Let x,y 2 RA, and , 2 R. Then, there exists v,u such that xAv and yAu. Thus, xyAvAuAvu.
Hence, xy 2 RA, which shows that RA is a subspace. Letx,y2NA,and,2R. Then,Ax0andAy0. Thus,
AxyAxAy0. Hence, xy 2 N A, which shows that N A is a subspace.
3.10
Let v 2 RB, i.e., vBx for some x. Consider the matrix A v. Then, NANA v, since if u 2 NA, then u 2 NB by assumption, and hence uvuBxxBu0. Now,
dim RAdim N Am dim RA vdim N A vm.
and
Since dim N Adim N A v, then we have dim RAdim RA v. Hence, v is a linear combi
nation of the columns of A, i.e., v 2 RA, which completes the proof.
3.11
We first show VV??. Let v 2 V, and u any element of V?. Then uvvu0. Therefore, v2V??. ?? ??
We now show V V . Let a1,,ak be a basis for V , and b1,,bl a basis for V. Define Aa1 ak and Bb1 bl, so that VRA and V ??RB. Hence, it remains to show that RBRA. Using the result of Exercise 3.10, it suces to show that NANB. So let x 2 NA, which implies that x 2 RA?V?, since RA?NA. Hence, for all y, we have Byx0yBx, which implies that Bx0. Therefore, x 2 NB, which completes the proof.
3.12
Letw2W?,andybeanyelementofV. SinceVW,theny2W. Therefore,bydefinitionofw,wehave wy0. Therefore, w 2 V?.
3.13
Let rdimV. Let v1,,vr be a basis for V, and V the matrix whose ith column is vi. Then, clearly V RV.
Let u1,,unr be a basis for V?, and U the matrix whose ith row is ui . Then, V?RU, and VV??RU?NU by Exercise 3.11 and Theorem 3.4.
3.14
a. Let x 2 V. Then, xPxI Px. Note that Px 2 V, and I Px 2 V?. Therefore, xPxIPxisanorthogonaldecompositionofxwithrespecttoV. However,xx0isalsoan orthogonal decomposition of x with respect to V. Since the orthogonal decomposition is unique, we must have xP x.
b. Suppose P is an orthogonal projector onto V. Clearly, RPV by definition. However, from part a, xPxforallx2V,andhenceVRP. Therefore,RPV.
3.15
To answer the question, we have to represent the quadratic form with a symmetric matrix as x 11 811 1!xx 1 72x.
2 1 1 2 8 1 72 1 9
The leading principal minors are 11 and 2454. Therefore, the quadratic form is indefinite.
3.16
The leading principal minors are 12, 20, 30, which are all nonnegative. However, the eigenvalues of A are 0, 1.4641, 5.4641 for example, use Matlab to quickly check this. This implies that the matrix A is indefinite by Theorem 3.7. An alternative way to show that A is not positive semidefinite is to find a vector x such that xAx0. So, let x be an eigenvector of A corresponding to its negative eigenvalue 1.4641. Then, xAxxxxxkxk20. For this example, we can take x0.3251, 0.3251, 0.8881 , for which we can verify that x Ax1.4643.
3.17
a. The matrix Q is indefinite, since 21 and 32.
b. Let x 2 M. Then, x2x3x1, x1x3x2, and x1x2x3. Therefore,
xQxx1x2 x3x2x1 x3x3x1 x2x21 x2 x23. This implies that the matrix Q is negative definite on the subspace M.
3.18
a. We have
Then,
260 0 037 26137 fx1,x2,x3x2x1,x2,x340 1 05425.
0 0 0 x3
260 0 037 Q40 1 05
000
and the eigenvalues of Q are 10, 21, and 30. Therefore, the quadratic form is positive semidefinite.
b. We have
Then,
and the eigenvalues of Q are 12, 21p22, and 31p22. Therefore, the quadratic form is indefinite.
c. We have
Then,
fx1,x2,x3x21 22 x1x3 x1,x2,x34 0 2 0 5425. 1 0 0 x3
21 0 13213 6 2767
2 21 0 13
627 Q40 2 05
1 0 0 2
261 1 137 26137 fx1,x2,x3x21 x23 2x1x2 2x1x3 2x2x3 x1,x2,x341 0 15425.
1 1 1 x3
261 1 137 Q41 0 15
111
and the eigenvalues of Q are 10, 21p3, and 31p3. Therefore, the quadratic form is indefinite.
10
3.19
We have
Let
fx1, x2, x3
26 42 6 37 26 x 1 37
Q42 1 35, x4x25x1e1 x2e2 x3e3,
andqvQv fori,j1,2,3. ij i j
and, in this case, we get
Case of i1. From v1 Qe11,
Therefore,
viQei1,
2611 0 037 Q40 22 05.
0 0 33
11e1Qe111e1 Qe111q111.
421x29234x1x26x2x312x1x3
26 4 2 6 3726137×1,x2,x342 1 35425.
6 3 9 x3
where e1, e2, and e3 form the natural basis for R3.
Let v1, v2, and v3 be another basis for R3. Then, the vector x is represented in the new basis as x , where
xv1,v2,v3x V x .
Now, fxxQxV x QV x x V QV x x Qx , where
2qqq 3 611 12 137
Q4qqq 5 21 22 23
qqq31 32 33
Wewillfindabasisv ,v ,v suchthatq0fori6j,andisoftheform 123 ij
v111e1
v221e122e2
v331e132e233e3
Because
wededucethatifviQej 0forji,thenviQvj 0. Inthiscase,
qvQv vQ e e vQe vQe, ij i j i j1 1 jj j j1 i 1 jj i j
qvQv vQ e e vQe vQe vQe. ii i i i i1 1 ii i i1 i 1 ii i i ii i i
Our task therefore is to find vi i1, 2, 3 such that
viQej0, ji
6 3 9 x3
11 11 1 q11 1 4
213v111e16047.
11
405 0
Case of i2. From v2 Qe10,
21e122e2Qe121e1 Qe122e2 Qe121q1122q210. From v2 Qe21,
Therefore,
21e122e2Qe221e1 Qe222e2 Qe221q1222q221. q11 q21 210 .
q12 q22 22 1
But, since 20, this system of equations is inconsistent. Hence, in this problem v2 Qe20 should be satisfied instead of v2 Qe21 so that the system can have a solution. In this case, the diagonal
matrix becomes
2611 0 037 Q40 0 05,
0 0 33
q11 q21210211 ,
where 22 is an arbitrary real number. Thus,
213
v2 21e1 22e2 641275a, 0
Since in this case 3detQ0, we will have to apply the same reasoning of the previous case and use the condition v3 Qe30 instead of v3 Qe31. In this way the diagonal matrix becomes
2611 0 037 Q40 005.
000 Thus, from v3 Qe10, v3 Qe20 and v3 Qe30,
and the system of equations become
q12 q22 22 0 22 12 22
where a is an arbitrary real number. Case of i3.
Therefore,
263137 263137Q 4325Q 4325
33 33
26 4 2 6 37263137 2603742 1 354325405.
6 3 9 33 0
26q11 q21 4q12 q22 q13 q23
q3137 263137 q325 4325 q33 33
263137 26 31 37 43254231 3335,
33 33 12
where 31 and 33 are arbitrary real numbers. Thus,
26 b 37 v3 31e1 32e2 33e3 42b3c5,
c
We represent this quadratic form as fxxQx, where
26 1 1 37 Q4 1 25.
1 2 5
The leading principal minors of Q are 11, 212, 3524. For the quadratic form to be positive definite, all the leading principal minors of Q must be positive. This is the case if and only if2 45, 0.
3.21
The matrix QQ0 can be represented as QQ12Q12, where Q12Q120. 1. Now, hx, xiQQ12xQ12xkQ12xk20, and
hx,xiQ0 , kQ12xk20 , Q12x0
hxy,ziQxyQz
xQzyQz
hx,ziQ hy,ziQ. 4. hrx, yiQrxQyrxQyrhx, yiQ.
where b and c are arbitrary real numbers. Finally,
21ab3 6427
where a, b, and c are arbitrary real numbers. 3.20
V x1,x2,x340 a 2b3c5, 00c
, x0 2. hx,yiQxQyyQxyQxhy,xiQ.
since Q12 is nonsingular. 3. We have
3.22
We have
We first show that kAk1maxi
PkAk1maxkAxk1 : kxk11.
nk1 aik. For this, note that for each x such that kxk11, we have
kAxk1
aik xk
XXn
n
max
ik1
max aik xk
i
k1
k1
max 13
aik,
Xn i
since xkmaxk xkkxk11. Therefore,
Pn k1 maxi k1 aik. So, let j be such that
Xn
ajkmax
Define xby
Clearly kx k11. Furthermore, for i 6 j,
and Therefore,
3.23
We have
We first show that kAk1maxk
XnXna j k xk
k1 k1
a j k.
Xn
ajkmax
i k1
aik.
Xn aik.
k1
if ajk 6 0 otherwise.
XnXn
aikx kaikmax
k1 k1 ik1 k1
k1
i
ajkajk x k 1
XnXn kAx k1max aikx k
aik.
i k1k1
Xn i
kAk1max
To show that kAk1maxi Pn aik, it remains to find a x2 Rn, kx k11, such that kAx k1
k1
Xn Xn aik
ajk
since Pnk1 xkkxk11. Therefore,
Xm
P kAk1maxkAxk1 : kxk11.
mi1 aik. For this, note that for each x such that kxk11, we have
kAxk1
Xm Xn aik xk
i1 k1
Xm Xn i1 k1
aik xknm
XX!
xkaikk1 i1
XXm max aik
Xn
xk
k1
k
m
max aik,
k
i1
kAk1max k
i1
aik.
14
i1
To shPow that kAk1maxk Pmi1 aik, it remains to find a x2 Rm, kx k11, such that kAx k1maxk mi1 aik. So, let j be such that
i1 i1 Definex by 1 ifkj
Clearly kx k11. Furthermore,
kAx k1 aikx k
4. Concepts from Geometry
4.1
Xm Xm
aijmax aik. k
x k0 otherwise .
Xm XnXm i1 k1i1
Xm k i1
aijmax
aik.
: Let Sx : Axb be a linear variety. Let x, y 2 S and2 R. Then,
Ax1yAx1Ayb1bb.
Therefore, x1y 2 S.
: If S is empty, we are done. So, suppose x0 2 S. Consider the set S0Sx0xx0 : x 2 S.
Clearly, for all x,y 2 S0 and2 R, we have x1y 2 S0. Note that 0 2 S0. We claim that S0
is a subspace. To see this, let x,y 2 S0, and2 R. Then, xx10 2 S0. Furthermore,
1 x1 y 2 S0, and therefore xy 2 S0 by the previous argument. Hence, S0 is a subspace. Therefore, by 22
SS0x0yx0:y2NAyx0 :Ay0
yx0 :Ayx0b
x:Axb.
4.2
Letu,v2x2Rn :kxkr,and20,1. Supposezu1v. Toshowthatisconvex, weneedtoshowthatz2,i.e.,kzkr. Tothisend,
kzk2u 1vu1v
2kuk221uv12kvk2.
Since u,v 2 , then kuk2r2 and kvk2r2. Furthermore, by the CauchySchwarz Inequality, we have uvkukkvkr2. Therefore,
kzk2 2r2 21r2 12r2 r2.
Hence, z 2 , which implies thatis a convex set, i.e., the any point on the line segment joining u and v
is also in .
4.3
Letu,v2x2Rn :Axb,and20,1. Supposezu1v. Toshowthatisconvex, weneedtoshowthatz2,i.e.,Azb. Tothisend,
AzAu1vAu1Av.
Exercise 3.13, there exists A such that S0NAx : Ax0. Define bAx0. Then,
15
Since u,v 2 , then Aub and Avb. Therefore,
Azb1bb,
and hence z 2 .
4.4
Letu,v2x2Rn :x0,and20,1. Supposezu1v. Toshowthatisconvex, we need to show that z 2 , i.e., z0. To this end, write xx1,,xn, yy1,,yn, and zz1,,zn. Then, zixi 1yi, i1,,n. Since xi,yi0, and ,10, we have zi0. Therefore, z0, and hence z 2 .
5. Elements of Calculus
5.1
5.2
Observe that
Therefore, if kAk1, then limk!1 kAkkO which implies that limk!1 AkO.
kAkkkAk1kkAkkAk2kkAk2kAkk.
For the case when A has all real eigenvalues, the proof is simple. Letbe the eigenvalue of A with largest
absolute value, and x the corresponding normalized eigenvector, i.e., Axx and kxk1. Then, kAkkAxkkxkkxk,
which completes the proof for this case.
In general, the eigenvalues of A and the corresponding eigenvectors may be complex. In this case, we
proceed as follows see 41. Consider the matrix BA,
kAk whereis a positive real number. We have
kBk kAk 1. kAk
By Exercise 5.1, Bk ! O as k ! 1, and thus by Lemma 5.1, iB1, i1,,n. On the other hand,
for each i1,,n,
and thus
which gives
Since the above arguments hold for any 0, we have iAkAk.
5.3
a. rfxabbax. b. Fxabba.
iBiA , kAk
iB iA 1. kAk
iAkAk.
16
5.4
We have and
By the chain rule,
5.5
We have and
By the chain rule,
and
5.6
We have and
Dfxx13,x22, dgt3.
d Ft dt
dt 2 Dfgtdgt
5t1.
Dfxx22,x12,
dt 3 3t53, 2t62 2
gs,t4, gs,t3. s 2 t 1
fgs,t s
fgs,t t
Dfgtgs,t
s
12st,4s3t 4 22
8s5t,
Dfgtgs,t
t 12st,4s3t 3
215s3t.
Dfx3x21x2x23
d xt 64 2 t 75 .
x2, x31x23x1, 2x31x2x31 2et3t23
dt 1
17
By the chain rule,
d fxt dt
Dfxtdxt dt
2et3t233x1t2x2tx3t2x2t, x1t3x3t2x1t, 2x1t3x2tx3t1 64 2t 75
12tet 3t23 2tet 6t2 2t1. Let 0 be given. Since fxogx, then
lim kfxk0. x!0 gx
Hence, there exists 0 such that if kxk, then
kfxk,
5.7
1
which can be rewritten as
gx
kfxkgx.
5.8
By Exercise 5.7, there exists 0 such that if kxk, then ogxgx2. Hence, if kxk, x 6 0,
then
5.9
We have that
and
fxgxogxgxgx21gx0. 2
x:f1x12x:x21 x2 12,
x:f2x16x:x2 81.
To find the intersection points, we substitute x281 into x21x212 to get x411221640.
Solving gives x2116, 4. Clearly, the only two possibilities for x1 are x14, 4, from which we obtain x22, 2. Hence, the intersection points are located at 4, 2 and 4, 2.
The level sets associated with f1x1, x212 and f2x1, x216 are shown as follows.
18
f1x1,x212
f1x1,x212 4,2
x2
f2x1,x216
3 2 1
12 12312 4,2
f2x1,x216
x1
5.10
a. We have We compute
Hence,
b. We compute
fxfxoDfxoxxo1xxoD2fxoxxo . 2
Dfxex2,x1ex2 1, 0 ex2
D2fx. ex2 x1 ex2
21,01 1 11 1,2 0 11 1 x22 112
fx
1x1x2x1x21x2.
2
Dfx431 4x1x2,4x21x2 432,
D2fx1221 42 8x1x2 . 8x1x2 421 122
Expanding f about the point xo yields
fx48,81 1 11 1,2 116 81 1
x21 2 8 16 x21821 82 161 162 8x1x2 12 .
19
c. We compute
D2fxExpanding f about the point xo yields
.
Dfxex1x2 ex1x2 1,ex1x2 ex1x2 1,
ex1x2ex1x2 ex1x2ex1x2
ex1x2ex1x2ex1x2ex1x2
fx22e2e1,11 1 11 1,x22e 0x1 1 x22 02ex2
11 x2 e1x21 x2 .
20
6. Basics of Unconstrained Optimization
6.1
a. In this case, x is definitely not a local minimizer. To see this, note that d1,2 is a feasible direction at x. However, drfx1, which violates the FONC.
b. In this case, x satisfies the FONC, and thus is possibly a local minimizer, but it is impossible to be definite based on the given information.
c. In this case, x satisfies the SOSC, and thus is definitely a strict local minimizer.
d. In this case, x is definitely not a local minimizer. To see this, note that d0, 1 is a feasible direction
at x, and drfx0. However, dFxd1, which violates the SONC.
6.2
Because there are no constraints on x1 or x2, we can utilize conditions for unconstrained optimization. To proceed, we first compute the function gradient and find the critical points, that is, the points that satisfy
the FONC,
The components of the gradient rfx1,x2 are
rfx1, x20.
f x214 and f x216.
x1 x2
x1 2, x22 , x3 2, and x4 2.
Thus there are four critical points:
4 4 4 4
We next compute the Hessian matrix of the function f: Fx2x1 0 .
Note that Fx10 and therefore, x1 is a strict local minimizer. Next, Fx40 and therefore, x4 is a strict local maximizer. The Hessian is indefinite at x2 and x3 and so these points are neither maximizer nor minimizers.
6.3
Supposex isaglobalminimizeroff over,andx 20 . Letx20. Then,x2andtherefore fxfx. Hence, x is a global minimizer of f over 0.
6.4
Suppose x is an interior point of . Therefore, there exists 0 such that y : kyxk. Since x isalocalminimizeroffover,thereexists0 0suchthatfxfxforallx2y:kyxk0. Take00 min,0. Then,y:kyxk000,andfxfxforallx2y:kyxk00. Thus, x is a local minimizer of f over 0.
To show that we cannot make the same conclusion if x is not an interior point, let 0, 01, 1, and fxx. Clearly, 0 2is a local minimizer of f over . However, 0 2 0 is not a local minimizer of f over 0.
6.5
a. The TONC is: if f0000, then f00000. To prove this, suppose f0000. Now, by the FONC, we also have f000. Hence, by Taylors theorem,
fxf0f0000x3ox3. 3!
Since 0 is a local minimizer, fxf0 for all x suciently close to 0. Hence, for all such x, f0000x3ox3.
3!
21
0 22
Now, if x0, then
which implies that f00000. On the other hand, if x0, then
ox3f00003! x3 ,
which implies that f00000. This implies that f00000, as required.
b. Let fxx4. Then, f000, f0000, and f00000, which means that the FONC, SONC, and
TONC are all satisfied. However, 0 is not a local minimizer: fx0 for all x 6 0. c. The answer is yes. To see this, we first write
fxf0f00xf000x2f0000x3. 2 3!
Now, if the FONC is satisfied, then
fxf0f000x2f0000x3. 2 3!
Moreover, if the SONC is satisfied, then either i f0000 or ii f0000. In the case i, it is clear from the above equation that fxf0 for all x suciently close to 0 because the third term on the righthand side is ox2. In the case ii, the TONC implies that fxf0 for all x. In either case, fxf0 for all x suciently close to 0. This shows that 0 is a local minimizer.
6.6
a. The TONC is: if f000 and f0000, then f00000. To prove this, suppose f000 and f0000. By Taylors theorem, for x0,
fxf0f0000x3ox3. 3!
ox3f00003! x3 ,
Since 0 is a local minimizer, fxf0 for suciently small x0. Hence, for all x0 suciently small, ox3
f00003! x3 . This implies that f00000, as required.
b. Let fxx4. Then, f000, f0000, and f00000, which means that the FONC, SONC, and TONC are all satisfied. However, 0 is not a local minimizer: fx0 for all x0.
6.7
For convenience, let z0x0 argminx2 fx. Thus we want to show that z0argminy20 fy; i.e., for all y 2 0, fyx0fz0x0. So fix y 2 0. Then, yx0 2 . Hence,
which completes the proof.
6.8
a. The gradient and Hessian of f are
fz0x0, rfx21 33
fyx0
Fx21 3. 37
minfx x2
f arg min f x x2
22
375
Hence, rf1,111,25, and F1,1 is as shown above.
b. The direction of maximal rate of increase is the direction of the gradient. Hence, the directional derivative
with respect to a unit vector in this direction is
rfxrfxrfxrfxkrfxk.
krf xk p krf xk Atx1,1,wehavekrf1,1k 11225227.31.
c. The FONC in this case is rfx0. Solving, we get
x32.
1
The point above does not satisfy the SONC because the Hessian is not positive semidefinite its determinant
is negative.
6.9
a. A dierentiable function f decreases most rapidly in the direction of the negative gradient. In our problem, r fx hff i h 2 x 1 x 2x 32 x 213 x 1 x 2 2 i.
rf x0 h5 10i . b. The rate of increase of f at x0 in the direction rf x0 is
rfx0
rf x0 krf x0 k 1255 5.
x1 x2 Hence, the direction of most rapid decrease is
p p rfx0 d h5 10i3111.
krf x0 k
c. The rate of increase of f at x0 in the direction d is
kdk 45 fx1x4 4xx37.
rfx4 43, 424
Fx4 4. 42
Hence rf0,17,6. The directional derivative is
1, 0rf0, 17.
6.10
a. We can rewrite f as
The gradient and Hessian of f are
2424
23
b. The FONC in this case is rfx0. The only point satisfying the FONC is x1 5.
42
The point above does not satisfy the SONC because the Hessian is not positive semidefinite its determinant
is negative. Therefore, f does not have a minimizer.
6.11
a. Write the objective function as fxx2. In this problem the only feasible directions at 0 are of the form dd1,0. Hence, drf00 for all feasible directions d at 0.
b. The point 0 is a local maximizer, because f00, while any feasible point x satisfies fx0.
The point 0 is not a strict local maximizer because for any x of the form xx1,0, we have fx
0f0, and there are such points in any neighborhood of 0.
The point 0 is not a local minimizer because for any point x of the form xx1,x21 with x10, we
have fxx410, and there are such points in any neighborhood of 0. Since 0 is not a local minimizer, it is also not a strict local minimizer.
6.12
a. We have rfx0,5. The only feasible directions at x are of the form dd1,d2 with d20. Therefore, for such feasible directions, drfx5d20. Hence, x0,1 satisfies the first order necessary condition.
b. We have F xO. Therefore, for any d, dF xd0. Hence, x0, 1 satisfies the second order necessary condition.
c. Consider points of the form xx1, x211, x1 2 R. Such points are in , and are arbitrarily close to x. However, for such points x 6 x,
fx5x21 15521 5fx. Hence, x is not a local minimizer.
6.13
a. We have rfx3,0. The only feasible directions at x are of the form dd1,d2 with d10. Therefore, for such feasible directions, drfx3d10. Hence, x2,0 satisfies the first order necessary condition.
b. We have F xO. Therefore, for any d, dF xd0. Hence, x2, 0 satisfies the second order necessary condition.
c. Yes, x is a local minimizer. To see this, notice that any feasible point xx1,x2 6 x is such that x12. Hence, for such points x 6 x,
In fact, x is a strict local minimizer.
6.14
fx316fx.
a. We have rfx0,1, which is nonzero everywhere. Hence, no interior point satisfies the FONC. Moreover, any boundary point with a feasible direction d such that d20 cannot be satisfy the FONC, because for such a d, drfxd20. By drawing a picture, it is easy to see that the only boundary point remaining is x0, 1. For this point, any feasible direction satisfies d20. Hence, for any feasible direction, drfxd20. Hence, x0,1 satisfies the FONC, and is the only such point.
b. We have F xO. So any point and in particular x0, 1 satisfies the SONC.
c. The point x0, 1 is not a local minimizer. To see this, consider points of the form xp1x2, x2 where x2 2 12, 1. It is clear that such points are feasible, and are arbitrarily close to x0, 1. However, for such points, fxx21fx.
24
6.15
a. We have rfx3,0. The only feasible directions at x are of the form dd1,d2 with d10. Therefore, for such feasible directions, drfx3d10. Hence, x2,0 satisfies the first order necessary condition.
b. We have F xO. Therefore, for any d, dF xd0. Hence, x2, 0 satisfies the second order necessary condition.
c. Consider points of the form xx22, x2, x2 2 R. Such points are in , and could be arbitrarily close to x. However, for such points x 6 x,
fx3x2 2662 6fx. Hence, x is not a local minimizer.
6.16
a. We have rfx0. Therefore, for any feasible direction d at x, we have drfx0. Hence, x
satisfies the firstorder necessary condition. b. We have
Any feasible direction d at x has the form dd1,d2 where d22d1, d1,d20. Therefore, for any feasible direction d at x, we have
dFxd8d21 2d2 8d21 22d12 0. Hence, x satisfies the secondorder necessary condition.
c. We have fx0. Any point of the form xx1,x2121, x10, is feasible and has objective function value given by
fx4x21 x21 212 x41 4x310fx,
Moreover, there are such points in any neighborhood of x. Therefore, the point x is not a local minimizer.
6.17
a. We have rfx11,12. If x were an interior point, then rfx0. But this is clearly impossible. Therefore, x cannot possibly be an interior point.
b. We have F x diag1x21, 12, which is negative definite everywhere. Therefore, the secondorder necessary condition is satisfied everywhere. Note that because we have a maximization problem, negative definiteness is the relevant condition.
6.18
so that xis the minimizer of f. By the FONC, and hence Xn
Fx8 0. 0 2
Given x 2 R, let Xn fx
which on solving gives
xxi2, f 0x0 ,
2x xi0, i1
1 Xn x n xi.
i1 25
i1
6.19
Let 1 be the angle from the horizontal to the bottom of the picture, and 2 the angle from the horizontal to the top of the picture. Then, tantan2tan11tan2 tan1. Now, tan1bx and tan2abx. Hence, the objective function that we wish to maximize is
fx abxbxa 1babx2 xbabx
. a2bab
We have
Let x be the optimal distance. Then, by the FONC, we have f0x0, which gives
f0xxbabx2 1×2 . 1babp0
x 2
xbab.
The squared distance from the sensor to the babys heart is 1×2, while the squared distance from the sensor to the mothers heart is 12×2. Therefore, the signal to noise ratio is
6.20
We have
122 fx1x2 .
22x1x22x12x2 1×22
f0x
122 .
42 21
By the FONC, at the optimal position x, we have f0x0. Hence, either x1p2 or x1p2.
From the figure, it easy to see that x1p2 is the optimal position. 6.21
a. Let x be the decision variable. Write the total travel time as fx, which is given by
p1x2 p1dx2 fx vv .
12
Dierentiating the above expression, we get
f0x pxp dx .
v1 12 v2 1dx2
By the first order necessary condition, the optimal path satisfies f0x0, which corresponds to
pxpdx , v1 12 v2 1dx2
or sin 1v1sin 2v2. Upon rearranging, we obtain the desired equation. b. The second derivative of f is given by
f00x 11 . v11x232 v21dx232
Hence, f00x0, which shows that the second order sucient condition holds. 26
6.22
a. We have fxU1x1U2x2 and x : x1, x20, x1x21. A picture oflooks like: x2
1
0 11
b. We have rfxa1,a2. Because rfx 6 0, for all x, we conclude that no interior point satisfies the FONC. Next, consider any feasible point x for which x20. At such a point, the vector d1, 1 is a feasible direction. But then drfxa1a20 which means that FONC is violated recall that the problem is to maximize f. So clearly the remaining candidates are those x for which x20. Among these, if x11, then d0,1 is a feasible direction, in which case we have drfxa20. This leaves the point x1,0. At this point, any feasible direction d satisfies d10 and d2d1. Hence, for any feasible direction d, we have
drfxd1a1d2a2d1a1d1a2d1a1a20. So, the only feasible point that satisfies the FONC is 1,0.
c. We have FxO0. Hence, any point satisfies the SONC again, recall that the problem is to maximize f.
6.23
We have
Setting rfx0 we get
rfx 41 x23 21 2 . 41 x23 22 2
41 x23 21 20 41 x23 22 20.
Adding the two equations, we obtain x1x2, and substituting back yields x1 x2 1.
Hence, the only point satisfying the FONC is 1,1.
We have Hence
Fx12x1 x22 2 121 x22 . 121×22 121×222
F1,12 0 0 2
Since F 1, 1 is not positive semidefinite, the point 1, 1 does not satisfy the SONC.
6.24
Suppose d is a feasible direction at x. Then, there exists 00 such that xd 2for all2 0, 0. Let 0 be given. Then, xd 2for all2 0,0. Since 00, by definition d is also a feasible direction at x.
6.25
: Suppose d is feasible at x 2 . Then, there exists 0 such that xd 2 , that is, Axdb.
Since Axb and6 0, we conclude that Ad0. 27
: Suppose Ad0. Then, for any2 0,1, we have Ad0. Adding this equation to Axb, we obtainAxdb,thatis,xd2forall20,1. Therefore,disafeasibledirectionatx.
6.26
The vector d1, 1 is a feasible direction at 0. Now,
drf0 f 0 f 0.
x1 x2
Since rf00 and rf0 6 0, then
Hence, by the FONC, 0 is not a local minimizer.
6.27 We have rfxc 6 0. Therefore, for any x 2, we have rfx 6 0. Hence, by Corollary 6.1, x 2 cannot be a local minimizer and therefore it cannot be a solution.
6.28
The objective function is fxc1x1c2x2. Therefore, rfxc1,c2 6 0 for all x. ThuSs, by FONC, the optimal solutionSx cannot lie in the interior of the feasible set. Next, for all x 2 L1 L2, d1,1 is a feasible direction. Therefore, drfxc1c20. Hence, by FONC, the optimal solution x cannot lie in L1 L2. Lastly, for all x 2 L3, d1,1 is a feasible direction. Therefore, drfxc2c10. Hence, by FONC, the optimal solution x cannot lie in L3. Therefore, by elimination, the unique optimal feasible solution must be 1,0.
6.29
drf00.
a. We write
1 Xn
fa,bn a2x2i b2 yi2 2xiab2xiyia2yib i1 ! !
1 Xn 1 Xn
a2 x2ib22 xiab
n i1
1Xn ! 1Xn ! 1Xn !
n i1
2 n xiyi a2 n yi b n yi2
i1 i1 i1
1 Pn xa n i1 i n i1 i
1 Pn x2 ab1Pnxi 1 b
n i1
1Xn 1Xn a1Xn
2 n xiyi,n
i1 i1 i1
yi b n yi2
b. If the point za,b is a solution, then byPthe FONC, we have rfz2Qz 2c0,
zQz2czd, where z, Q, c and d are defined in the obvious way.
which means Qzc. Now, since X2 X21 n xi X2, and the xi are not all equal, then n i1
X2X2 X X2 Y X2Y XXYX2X2
det QX2X2 6 0. Hence, Q is nonsingular, and hence
2 XY XY3 zQ1c 1 1 X XY 4 X2X2 5.
Since Q0, then by the SOSC, the point z is a strict local minimizer. Since z is the only point satisfying the FONC, then z is the only local minimizer.
c. We have
a Xb X YX Y XX 2 Y X X Y Y . X2X2 X2X2
28
6.30
Given x 2 Rn, let
be the average squared error between x and x1, . . . , xp. We can rewrite f as
fx
1 Xp p i1
!
1Xp 1
fx
1p X
kxxik2 xxixxi
p i1
Hence, we get
i.e., xis just the average, or centroid, or center of gravity, of x1 , . . . , xp .
xx2 xi xkxik2. p i1 p
So f is a quadratic function. Since xis the minimizer of f, then by the FONC, rfx 0, i.e.,
1 Xp p i1
2 x 2 x
xi 0 .
1 Xp p i1
xi,
The Hessian of f at xis
which is positive definite. Hence, by the SOSC, xis a strict local minimizer of f in fact, it is a strict global
Fx2 I n , minimizer because f is a convex quadratic function.
6.31
Fix any x 2 . The vector dxx is feasible at x by convexity of . By Taylors formula, we have fxfxdrfxokdkfxckdkokdk.
Therefore, for all x suciently close to x, we have fxfx. Hence, x is a strict local minimizer. 6.32
Since f 2 C2, F xF x. Let d 6 0 be a feasible directions at x. By Taylors theorem, fxdfx1drfxdFxdokdk2.
and the proof is completed.
6.33
Necessity follows from the FONC. To prove suciency, we write f as
fx1xxQxx1xQx
where xQ1b is the unique vector satisfying the FONC. Clearly, since 1xQx is a constant, and
2
Using conditions a and b, we get
Therefore, for all d such that kdk is suciently small,
fxdfxckdk2okdk2, fxdfx,
22
Q0, then
2
fxfx1xQx, 2
29
and fxfx if and only if xx. 6.34
Write uu1,,un. We have
xn
aaxn2bun1 bun
cu,
where can1b, . . . , ab, b. Therefore, the problem can be written as
minimize ruuqcu, which is a positive definite quadratic in u. The solution is therefore
uq c, 2r
or, equivalently, uiqanib2r, i1, . . . , n. 7. One Dimensional Search Methods
7.1
axn1bun
abun1
anx0 an1bu1 abun1 bun
a2 xn2.
bun
The range reduction factor for 3 iterations of the Golden Section method is
the Fibonacci method with 0 is 1F310.2. Hence, if the desired range reduction factor is anywhere between 0.2 and 0.236 e.g., 0.21, then the Golden Section method requires at least 4 iterations, while the Fibonacci method requires only 3. So, an example of a desired final uncertainty range is 0.21850.63.
7.2
a. The plot of fx versus x is as below: 3.2
3.1 3 2.9 2.8 2.7 2.6 2.5 2.4
2.3
1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2
x
b. The number of steps needed for the Golden Section method is computed from the inequality:
p
51230.236, while that of
0.61803N0.2 21
30
N3.34.
fx
Therefore, the fewest possible number of steps is 4. Applying 4 steps of the Golden Section method, we end up with an uncertainty interval of a4,b01.8541,2.000. The table with the results of the intermediate steps is displayed below:
c. The number of steps needed for the Fibonacci method is computed from the inequality: 12 0.2N4.
FN1 21
Therefore, the fewest possible number of steps is 4. Applying 4 steps of the Fibonacci method, we end up with an uncertainty interval of a4 , b0 1.8750, 2.000. The table with the results of the intermediate steps is displayed below:
Iteration k
ak
bk
fak
fbk
New uncertainty interval
1
1.3820
1.6180
2.6607
2.4292
1.3820,2
2
1.6180
1.7639
2.4292
2.3437
1.6180,2
3
1.7639
1.8541
2.3437
2.3196
1.7639,2
4
1.8541
1.9098
2.3196
2.3171
1.8541,2
Iteration k
k
ak
bk
fak
fbk
New unc. int.
1
0.3750
1.3750
1.6250
2.6688
2.4239
1.3750,2
2
0.4
1.6250
1.7500
2.4239
2.3495
1.6250,2
3
0.3333
1.7500
1.8750
2.3495
2.3175
1.7500,2
4
0.45
1.8750
1.8875
2.3175
2.3169
1.8750,2
d. Wehavef0x2x4sinx,f00x24cosx. Hence,Newtonsalgorithmtakestheform: xk2 sin xk
xk1 xk12cosxk .
Applying 4 iterations with x01, we get x17.4727, x214.4785, x36.9351, x416.6354.
Apparently, Newtons method is not eective in this case.
7.3
a. We first create the Mfile f.m as follows:
f.m
function yfx
y8exp1x7logx;
The MATLAB commands to plot the function are:
fplotf,1 2;
xlabelx;
ylabelfx;
The resulting plot is as follows:
31
8
7.95
7.9
7.85
7.8
7.75
7.7
7.65
1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2
x
b. The MATLAB routine for the Golden Section method is:
Matlab routine for Golden Section Search
left1;
right2;
uncert0.23;
rho3sqrt52;
Nceilloguncertrightleftlog1rho print N
lowera;
aleft1rhorightleft;
fafa;
for i1:N,
if lowera
ba
fbfa
aleftrhorightleft
fafa
else ab
fafb
bleft1rhorightleft
fbfb
end if
if fafb
rightb;
lowera
else
lefta;
lowerb
end if
NewIntervalleft,right
end for i
Using the above routine, we obtain N4 and a final interval of 1.528, 1.674. The table with the results of the intermediate steps is displayed below:
32
fx
Iteration k
ak
bk
fak
fbk
New uncertainty interval
1
1.382
1.618
7.7247
7.6805
1.382,2
2
1.618
1.764
7.6805
7.6995
1.382,1.764
3
1.528
1.618
7.6860
7.6805
1.528,1.764
4
1.618
1.674
7.6805
7.6838
1.528,1.674
c. The MATLAB routine for the Fibonacci method is:
Matlab routine for Fibonacci Search technique
left1;
right2;
uncert0.23;
epsilon0.05;
F11;
F21;
N0;
while FN212epsilonrightleftuncert
FN3FN2FN1;
NN1;
end while
N print N
lowera;
aleftFN1FN2rightleft;
fafa;
for i1:N,
if iN
rho1FN2iFN3i
else
rho0.5epsilon
end if
if lowera
ba
fbfa
aleftrhorightleft
fafa
else ab
fafb
bleft1rhorightleft
fbfb
end if
if fafb
rightb;
lowera
else
lefta;
lowerb
end if
NewIntervalleft,right
end for i
33
Using the above routine, we obtain N3 and a final interval of 1.58, 1.8. The table with the results of the intermediate steps is displayed below:
Iteration k
k
ak
bk
fak
fbk
New uncertainty interval
1
0.4
1.4
1.6
7.7179
7.6805
1.4,2
2
0.333
1.6
1.8
7.6805
7.7091
1.4,1.8
3
0.45
1.58
1.6
7.6812
7.6805
1.58,1.8
7.4
Now, k1FNk1 . Hence, FN k2
1
k
1k
1 1FNk1FNk2 FNk1FNk21FNk2FNk1
FN k11 FNk
FN k1k1
To show that 0k12, we proceed by induction. Clearly 112 satisfies 0112. Suppose 0k 12,wherek21,,N1. Then,
and hence Therefore,
Sincek1 1 k ,then 1k
as required.
7.5
11k1 2
1 1 2. 1k
1 k 1. 2 1k
0k11 2
We proceed by induction. For k2, we have F0F3F1F21312112. Suppose Fk2Fk1Fk1Fk1k. Then,
7.6
Fk1Fk2FkFk1Fk1Fk1FkFk1Fk2Fk1Fk1FkFk2Fk1
1k1k1.
Define ykFk and zkFk1. Then, we have
yk1Ayk,
where
zk1 zk A1 1,
10 34
with initial condition
We can write
y01. z0 0
Fnyn1,0yn1,0An 1. zn 0
Since A is symmetric, it can be diagonalized as
where
and
Therefore, we have
7.7
Au1 0 hu vi v 02
11p5, 21p5, 22
1 2q2p513 u514 4qp5125,
1 0 n Fnu 0u
1 0 1p5!n1 1p5!n11A p5 22 .
1 2 qp512 3 v 514 4q2p515.
2u 21n1u 2 2n2
The number log2 is the root of the equation gx0, where gxexpx2. The derivative of g is g0xexpx. Newtons method applied to this root finding problem is
expxk2
xk1xkexpxkxk12 expxk.
Performing two iterations, we get x10.7358 and x20.6940.
7.8
a. We compute g0x2exex12. Therefore Newtons method of tangents for this problem takes the form
exk 1exk 1 2exkexk 12
xk1xk
xk
xk sinhxk.
e2xk 1 2exk
b. By symmetry, we need x1x0 for cycling. Therefore, x0 must satisfy x0x0sinh x0.
The algorithm cycles if x0c, where c0 is the solution to 2csinh c. 35
c. The algorithm converges to 0 if and only if x0c, where c is from part b.
7.9
The quadratic function that matches the given data xk, xk1, xk2, fxk, fxk1, and fxk2 can be computed by solving the following three linear equations for the parameters a, b, and c:
axki2 bxki cfxki, i0,1,2.
Then, the algorithm is given by xk1b2a so, in fact, we only need to find the ratio of a and b. With some elementary algebra e.g., using Cramers rule without needing to calculate the determinant in the denominator, the algorithm can be written as:
xk112fxk20fxk101fxk2 212fxk20fxk101fxk2
where ijxki2xkj2 and ijxkixkj. 7.10
a. A MATLAB routine for implementing the secant method is as follows.
function x,vsecantg,xcurr,xnew,uncert;
Matlab routine for finding root of gx using secant method
secant;
secantg;
secantg,xcurr,xnew;
secantg,xcurr,xnew,uncert;
xsecant;
xsecantg;
xsecantg,xcurr,xnew;
xsecantg,xcurr,xnew,uncert;
x,vsecant;
x,vsecantg;
x,vsecantg,xcurr,xnew;
x,vsecantg,xcurr,xnew,uncert;
The first variant finds the root of gx in the Mfile g.m, with
initial conditions 0 and 1, and uncertainty 105.
The second variant finds the root of the function in the Mfile specified
by the string g, with initial conditions 0 and 1, and uncertainty 105.
The third variant finds the root of the function in the Mfile specified
by the string g, with initial conditions specified by xcurr and xnew, and
uncertainty 105.
The fourth variant finds the root of the function in the Mfile specified
by the string g, with initial conditions specified by xcurr and xnew, and
uncertainty specified by uncert.
The next four variants returns the final value of the root as x.
The last four variants returns the final value of the root as x, and
the value of the function at the final value as v.
if nargin4
uncert105;
if nargin3
if nargin1
xcurr0;
xnew1;
elseif nargin0
36
gg; else
dispCannot have 2 arguments.;
return; end
end end
gcurrfevalg,xcurr;
while absxnewxcurrxcurruncert,
xoldxcurr;
xcurrxnew;
goldgcurr;
gcurrfevalg,xcurr;
xnewgcurrxoldgoldxcurrgcurrgold;
end while
print out solution and value of gx
if nargout1
xxnew;
if nargout2
vfevalg,xnew;
end
else
finalpointxnew
valuefevalg,xnew
end if
b. We get a solution of x0.0039671, with corresponding value gx9.908108. 7.11
function alphalinesearchsecantgrad,x,d
Line search using secant method
epsilon104; line search tolerance
max100; maximum number of iterations
alphacurr0;
alpha0.001;
dphizerofevalgrad,xd;
dphicurrdphizero;
i0;
while absdphicurrepsilonabsdphizero,
alphaoldalphacurr;
alphacurralpha;
dphiolddphicurr;
dphicurrfevalgrad,xalphacurrdd;
alphadphicurralphaolddphioldalphacurrdphicurrdphiold;
ii1;
if imaxabsdphicurrepsilonabsdphizero,
dispLine search terminating with number of iterations:;
dispi;
break;
end
end while
37
7.12
a. We could carry out the bracketing using the onedimensional function 0fx0d0, where d0 is the negative gradient at x0, as described in Section 7.8. The decision variable would be . However, here we will directly represent the points in R2 which is equivalent, though unnecessary in general.
The uncertainty interval is calculated by the following procedure:
Therefore,
fx1x 2 1x, rfx2 1x 212 12
drfx02 1 0.8 1.35
1 2 0.25
0.3
x1 x0 d 0.8 0.0751.35 0.69870.25 0.3 0.2725
Then, we proceed as follows to find the uncertainty interval:
fx1f0.6987 !0.3721
0.2725
x2 x0 2d 0.8 0.151.35 0.5975
0.25 0.3 0.2950 fx2f0.5975 !0.2678
0.2950
x3 x0 4d 0.8 0.31.35 0.3950
0.25 0.3 0.3400 fx3f0.3950 !0.1373
0.3400
x4x08d 0.8 0.6 1.350.0100
0.25 0.3 0.4300 fx4f 0.0100!0.1893
0.4300
Between fx3 and fx4 the function increases, which means that the minimizer must occur on the interval x2, x4 0.5975, 0.0100, with d1.35.
As the problem requires, we use 0.075. First, we begin calculating fx0 and x1:
0.2950 0.4300 MATLAB code to solve the problem is listed next.
0.3
Coded by David Schvartzman
fx0f0.8 !0.5025, 0.25
38
In our case we have:
Q2 1; 1 2;
x00.8; .25;
e0.075;
fzeros1,10;
Xzeros2,10;
x1x0;
dQx1;
f10.5x1Qx1;
for i2:10
X:,ix1ed;
fi0.5X:,iQX:,i;
e2e;
iffifi1
break; end
end
The interval is defined by:
aX:,i2;
bX:,i;
strsprintfThe minimizer is located in: a, b, where a.4f; .4f
and b.4f; .4f, a1,1, a2,1, b1,1, b2,1;
dispstr;
b. First, we determine the number of necessary iterations:
The initial uncertainty interval width is 0.6223. This width will be 0.62230.618N after N stages. We
choose N so that
We show the first iteration of the algorithm; the rest are analogous and shown in the following table.
From part a, we know that a0, b0x2, x4, then:
a0,b0 0.5975 ,0.0100, with fa00.2678, fb00.1893.
0.618N0.01N9 0.6223
0.2950 0.4300
a1a0b0a00.36550.3466T
b1 a0 1b0 a00.2220 0.3784T fa10.1270
fb10.1085
We can see fa1fb1, hence the uncertainty interval is reduced to: a1, b0 0.3655, 0.0100
0.3466 0.4300
So, calculating the norm of b0a1, we see that the uncertainty region width is now 0.38461.
39
Iteration
1
2
3
4
5
6
7
8
9
ak 0.3655
0.3466 0.2220
0.3784 0.2768
0.3663 0.2220
0.3784 0.2430
0.3738 0.2559
0.3709 0.2430
0.3738 0.2479
0.3727 0.2430
bk 0.2220
0.3784 0.1334
0.3981 0.2220
0.3784 0.1882
0.3860 0.2220
0.3784 0.2430
0.3738 0.2350
0.3756 0.2430
0.3738 0.2399
fak 0.1270
0.1085
0.1094
0.1085
0.1079
0.1081
0.1079
0.1080
0.1079
fbk 0.1085
0.1232
0.1085
0.1117
0.1085
0.1079
0.1080
0.1079
0.1079
New Uncertainty Interval
.3655 , 0.3466
.3655 , 0.3466
.2768 , 0.3663
.2768 , 0.3663
.2768 , 0.3663
0.01000.4300
.13340.3981
.13340.3981
.18820.3860
.22200.3784
0.3738
We can now see that the minimizer is located within0.2479 , 0.2399 , and its uncertainty
interval width is 0.00819.
Matlab code used to perform calculations is listed next
0.3745
0.2559 , 0.22200.3709 0.3784
0.2559 , 0.23500.3709 0.3756
0.2479 , 0.23500.3727 0.3756
0.2479 , 0.23990.3727 0.3745
Coded by David Schvartzman
To succesfully run this program, run
the previous script to obatin a and b.
e0.01;
Q2 1; 1 2;
ro0.53sqrt5;
First we determine the number of necessary iterations.
dnormab;Nceil logedlog1ro;
fa0.5aQa;
fb0.5bQb;
str1sprintfInitial values:;
str2sprintfa0.4f,.4f., a1, a2;
str3sprintfb0.4f,.4f., b1, b2;
str4sprintffa0.4f., fa;
str5sprintffb0.4f., fb;
40
0.3727 0.3745
strnsprintfn;
dispstrn;
dispstr1;dispstr2;
dispstr3;dispstr4;
dispstr5;
saroba;
ta1roba;
fs0.5sQs;
ft0.5tQt;
for i1:N
str1sprintfIteration number: d, i;
str2sprintfad.4f,.4f., i, s1, s2;
str3sprintfbd.4f,.4f., i, t1, t2;
str4sprintffad.4f., i, fs;
str5sprintffbd.4f., i, ft;
if ftfs
bt;
fbft;
ts;
ftfs;
saroba;
fs0.5sQs;
else
as;
fafs;
st;
fsft;
ta1roba;
ft0.5tQt;
end
str6sprintfNew uncertainty interval: ad.4f,.4f,
bd.4f,.4f., i, a1, a2, i, b1, b2;
dispstrn;
dispstr1
dispstr2
dispstr3
dispstr4
dispstr5
dispstr6
end
The interval where the minimizer is boxed in is given by:
ana;
bnb;
We can return anbn2 as the minimizer.
minanbn2;dispstrn;
strsprintfThe minimizer x is: .4f; .4f, min1,1, min2,1;
dispstr;
c. We need to determine the number of necessary iterations:
The initial uncertainty interval width is 0.6223. This width will be 0.6223 12 , where Fk is the kth
FN 1
element of the Fibonacci sequence. We choose N so that
120.010.0161FN112
FN 1 0.6223 0.0161 41
For 0.05, we require FN 168.32, thus F1089 is enough, and we have N101, 9 iterations. We show the first iteration of the algorithm; the rest are analogous and shown in the following table.
From part a, we know that a0, b0x2, x4, then:
a0,b0 0.5975 ,0.0100, with fa00.2678, fb00.1893.
0.2950 0.4300 RecallthatintheFibonaccimethod,1 1 FN
155 0.3820. 89
a1a01b0a00.36540.3466T
b1a011b0a00.22210.3784T fa10.1270
fb10.1085
We can see fa1fb1, hence the uncertainty interval is reduced to: a1, b0 0.3654, 0.0100
0.3466 0.4300
So, calculating the norm of b0a1, we see that the uncertainty region width is now 0.38458.
FN1
k k
1 0.3820
2 0.3818
3 0.3824
ak
0.3654
0.34660.2221
0.37840.2767
0.3663
bk
0.2221
0.37840.1333
0.39810.2221
0.3784
fak 0.1270
0.1085
0.1094
fbk 0.1085
0.1232
0.1085
New Uncertainty Interval0.3654, 0.0100
0.3466 0.43000.3654 , 0.1333
0.3466 0.39810.2767 , 0.1333
0.3663 0.3981
42
k k ak bk fak
fbk 0.1118
0.1085
0.1079
0.1080
0.1079
0.1079
New Uncertainty Interval
4 0.3810
5 0.3846
6 0.3750
7 0.4000
8 0.3333
9 0.4500
0.22210.3784
0.24260.3739
0.25620.3708
0.24260.3739
0.24940.3724
0.18790.3860
0.22210.3784
0.24260.3739
0.23570.3754
0.24260.3739
0.1085
0.1079
0.1082
0.1079
0.1080
0.1079
0.2767,0.3663
0.2767,0.3663
0.2562,0.3708
0.2562,0.3708
0.2494,0.3724
0.18790.3860
0.22210.3784
0.22210.3784
0.23570.3754
0.23570.3754
0.24260.3739
0.24190.3740
0.2494,0.2419
0.3724
We can now see that the minimizer is located within0.2494 , 0.2419 , and its uncertainty
0.3740
interval width is 0.00769. 0.3724 0.3740
Matlab copde used to perform calculations is listed next.
Coded by David Schvartzman
To succesfully run this program, run the first of the above scripts
to obtain a and b.
We take
e0.01;
Q2 1; 1 2;
First determine the number of necessary iterations.
dnormab;
FN1 20.051ed;
Fzeros1,20;
F10;
F21;
for i1:20
Fi1FiFi1;
ifFi2 FN1
break; end
end
Ni1;
rozeros1, N1;
for i1:N
roi1FN3iFN4i;
end
roNroN0.05;
fa0.5aQa;
fb0.5bQb;
43
str1sprintfInitial values:;
str2sprintfa0.4f,.4f., a1, a2;
str3sprintfb0.4f,.4f., b1, b2;
str4sprintffa0.4f., fa;
str5sprintffb0.4f., fb;
strnsprintfn;
dispstrn;
dispstr1;
dispstr2;
dispstr3;
dispstr4;
dispstr5;
saro1ba;
ta1ro1ba;
fs0.5sQs;
ft0.5tQt;
for i1:N
str1sprintfIteration number: d, i;
str2sprintfad.4f,.4f., i, s1, s2;
str3sprintfbd.4f,.4f., i, t1, t2;
str4sprintffad.4f., i, fs;
str5sprintffbd.4f., i, ft;
if ftfs
bt;
ts;
ftfs;
saroi1ba;
fs0.5sQs;
else
as;
st;
fsft;
ta1roi1ba;
ft0.5tQt;
end
str6sprintfNew uncertainty interval: ad.4f,.4f,
bd.4f,.4f., i, a1, a2, i, b1, b2;
str7sprintfUncertainty interval width: .5f, normab;
dispstrn;
dispstr1
dispstr2
dispstr3
dispstr4
dispstr5
dispstr6
dispstr7
end
The minimizer is boxed in the interval:
ana;
bnb;
We can return anbn2 as the minimizer.
minanbn2;
dispstrn;
strsprintfThe minimizer x is: .4f; .4f, min1,1, min2,1;
44
dispstr;
8. Gradient Methods
8.1
The function f is a quadratic and so we can represent it in standard form as f1x1 0xx131xQxxbc.
202 12 2
The first iteration isx1x00rf x0 .
To find x1, we need to compute rf x0g0. We have The step size, 0, can be computed as
g0Qx0bh1 12i.
Hence,
The second iteration is where
and Hence,
0g0g05. g0Qg0 6
x1 0g0 5 1 56. 6 12 512
x2x11rf x1 , rfx1g1 Qx1 b 16 ,
13 56 516 25
1g1g1 g1Qg1
5. 9
x2x11g1
The optimal solution is x1, 14 obtaind by solving the equation Qxb.
8.2
Let s be the order of convergence of xk. Suppose there exists c0 such that for all k suciently large, kxk1xkckxkxkp.
27 . 512 9 1325
108
Hence, for all k suciently large,
kxk1xk
Taking limits yields
kxk1xk 1
kxkxkp kxkxksp
kxkxks
lim kxk1xk c .
c. kxkxksp
k!1 kxkxks limk!1 kxkxksp 45
Since by definition s is the order of convergence,
lim kxk1xk1.
k!1 kxkxks Combining the above two inequalities, we get
c1. limk!1 kxkxksp
Therefore, since limk!1 kxkxk0, we conclude that sp, i.e., the order of convergence is at most p. 8.3
We use contradiction. Suppose xk ! x and
lim kxk1xk0
k!1 kxkxkp
for some p1. We may assume that xk 6 x for an infinite number of k for otherwise, by convention,
the ratio above is eventually 0. Fix 0. Then, there exists K1 such that for all kK1, kxk1xk.
kxkxkp Dividing both sides by kxkxk1p, we obtain
kxk1 xk . kxkxk kxkxk1p
Because xk ! x and p1, we have kxk xk1p ! 0. Hence, there exists K2 such that for all kK2, kxkxk1p. Combining this inequality with the previous one yields
kxk1xk1 kxkxk
8.4
for all kmaxK1, K2; i.e.,
which contradicts the assumption that xk ! x.
kxk1xkkxkxk,
a. The sequence converges to 0, because the exponent 2k2 grows unboundedly negative as k ! 1.
b. The order of convergence of xk is 1. To see this, we first write, for p1, xk122k12
22k2 p
22k2 2k1 p2k2
22k2 22k1p.
But notice that the exponent 2k2 22k1p grows unboundedly negative as k ! 1, regardless of the value
of p. Therefore, for any p,
which means that the order of convergence is 1.
xkp
lim xk10, k!1 xkp
46
8.5
a. We have
ak x0 . Because0a1,wehaveak !0,andhencexk !0.
b. Similarly, we have
xkaxk1
aaxk2
a2xk2 .
ykyk1b
yk2bb
yk2b2 .
y0bk . Becausey01andb1,wehavebk !1andhenceyk !0.
c. The order of convergence of xk is 1 because
lim xk1lim aa,
lim yk1lim 11, k!1 ykb k!1
d. Suppose xkcx0. Using part a, we have akc, which implies that klogcloga. So the smallest number of iterations k such that xkcx0 is dlogc logae the smallest integer not smaller than logc loga.
e. Suppose ykcy0. Using part b, we have y0bkcy0. Taking logs twice and rearranging, we
k!1 xk k!1 The order of convergence of yk is b because
and 0a1. and 011.
have
Denote the righthand side by z. So the smallest number of iterations k such that ykcy0 is dze.
f. Comparing the answer in part e with that of part d, we can see that as c ! 0, the answer in part d is logc, whereas the answer in part e is Ologlogc. Hence, in the regime where c is very small, the number of iterations in part d linear convergence is at least exponentially larger than that in part e superlinear convergence.
8.6
k 1 log1 logc . logb logy0
We have uk11uk, and uk ! 0. Therefore,
lim uk1 10
k!1 ukand thus the order of convergence is 1.
47
8.7
a. The value of x in terms of a, b, and c that minimizes f is xba.
b. We have f0xaxb. Therefore, the recursive equation for the DDS algorithm is
xk1xkaxkb1axkb.
c. Let x limk!1 xk. Taking limits of both sides of xk1xkaxkb from part b, we get
x x a x b. d. To find the order of convergence, we compute
Hence, we get x bax.
1axkbba xkbap
Let zk1axkba1p. Note that zk converges to a finite nonzero number if and only if p1 if p1, then zk ! 0, and if p1, then zk ! 1. Therefore, the order of convergence of xk is 1,
e. Let ykxkba. From part d, after some manipulation we obtain yk11ayk1ak1y0.
The sequence xk converges to ba if and only if yk ! 0. This holds if and only if 1a1, which is equivalent to 02a.
8.8
We rewrite f as f x1 x Qxb x, where 2
Q6 4 46
The characteristic polynomial of Q is 21220. Hence, the eigenvalues of Q are 2 and 10. Therefore, the largest range of values offor which the algorithm is globally convergent is 0210.
8.9
a. WecanwritehxQxb,whereb4,1 and Q3 2
xk1ba xkbap
1axk1aba xkbap
1axkba1p.
23
Q1b13 242.
5231 1
b. By part a, the algorithm is a fixedstepsize gradient algorithm for a problem with gradient h. The eigenvalues of Q are 1 and 5. Hence, the largest range of values ofsuch that the algorithm is globally convergent to the solution is 025.
c. The eigenvectors of Q corresponding to eigenvalue 5 has the form c1, 1, where c 2 R. Hence, to violate the descent property, we pick 1 3
x0Q1bc1 0 48
is positive definite. Hence, the solution is
where we choose c1 so that x0 has the specified form. 8.10
a. We have
fx 1x 3 1axx1b. 21a31
b. The unique global minimizer exists if and only if the Hessian is positive definite, which holds if and only if 1a29 by Sylvesters criterion. Hence, the largest set of values of a and b such that the global minimizer of f exists is given by 4a2 and b 2 R unrestricted.
The minimizer is given by
x13 1a1 31a 1 1 1
91a2 1a 3 1 91a2 1 4a 1
c. The algorithm is a gradient algorithm with fixed step size 25. The eigenvalues of the Hessian are after some calculations 4a and 2a. For global convergence, we need 252max, or max5, where maxmax4a, 2a. From this we deduce that 3a1. Hence, the largest set of values of a and b such that the algorithm is globally convergent is given by 3a1 and b 2 R unrestricted.
8.11
a. We have
b. We have xk ! c if and only if fxk ! 0. HQence, the algorithm is globally conveQrgent if and only if
8.12 p p
The only local minimizer of f is x1 3. Indeed, we have f0x0 and f00x2 3. To find the largest range of values ofsuch that the algorithm is locally convergent, we use a linearization argument: The algorithm is locally convergent if and only if the linearized algorithm xk1xkf00xxkx is globally convergent. But the linearized algorithm is just a fixed step size algorithm applied to a quadratic with second derivative f00x. Therefore, the largest range of values ofsuch that the algorithm is locally convergent is 02f00x1p3.
8.13
We use the formula from Lemma 8.1:
fxk11kfxk
we have Vf in this case. Using the expression for k, we get, assuming xk 6 1,
k 42k12k.
Hence, k0, which means that fxk1fxk if xk 6 1 for k0. This implies that the algorithm has the descent property for k0.
fxk1
xk1c22
xkkxkcc22
1k2xkc22
1k2fxk.
fxk ! 0 for any x0. From part a, we deduce that fxk ! 0 for any x0 if and only if Because 01, this condition is equivalent to 1k01k0, which holds if and only if
X1k1 . k0
1k01k20.
49
We also note that
X1 k4 X1 2kX1 4k!424infty. k0 k0 k0 3
Since k0 for all k0, we can apply the theorem given in class to deduce that the algorithm is not globally convergent.
8.14
We have
By Taylors Theorem,
xk1xxkxf0xkf00x.
f0xkf0xf00xxkxOxkx2. Since f0x0 by the FONC, we get
xkxf0xkf00xOxkx2. Combining the above with the first equation, we get
xk1xOxkx2, which implies that the order of convergence is at least 2.
8.15
a. The objective function is a quadratic that can be written as
fxaxbaxbkak2x22abxkbk2.
Hence, the minimizer is xabkak2.
b. Note that f00x2kak2. Thus, by the result for fixed step size gradient algorithms, the required largest
range foris 0, 1kak2. 8.16
a. We have
fxkAxbk2AxbAxb
xAbAxb
xAAx2Abxbb
which is a quadratic function. The gradient is given by rfx2AAx2Ab and the Hessian is
given by F x2AA.
b. The fixed step size gradient algorithm for solving the above optimization problem is given by
xk1xk2AAxk2Abxk2AAxkb.
c. The largest range of values forsuch that the algorithm in part b converges to the solution of the problem
is given by
8.17
021. max 2A A 4
a. We use contraposition. Suppose an eigenvalue of A is negative: Avv, where 0 and v is a corresponding eigenvector. Choose x0vx. Then,
x1 vx AvAx bvx v, 50
and hence
x1x1x0x. Since 11, we conclude that the algorithm is not globally monotone.
b. Note that the algorithm is identical to a fixed step size gradient algorithm applied to a quadratic with Hessian A. The eigenvalues of A are 1 and 5. Therefore, the largest range of values offor which the algorithm is globally convergent is 025.
8.18
The steepest descent algorithm applied to the quadratic function f has the form gkgk
xk1xkkgkxkgkQgk gk.
: If x1Q1b, then Rearranging the above yields
Since g0Qx0b 6 0, we have
Q1bx00g0.
Qx0b0Qg0.
Qg01 g0 0
which means that g0 is an eigenvector of Q with corresponding eigenvalue 10.
: By assumption, Qg0g0, where2 R. We want to show that Qx1b. We have
Qx1
g0g0Qx0 g0
g0Qg0
1 g0g0
Qx0
Qx0g0
b.
8.19
g0g0
Qg0
a. Possible. Pick f such that max2min and x0 such that g0 is an eigenvector of Q with eigenvalue
min. Then,
b. Not possible. Indeed, using Rayleighs inequality,
0g0g012 .
g0Qg0 min g0g0
max
1
.
Q3 2, b3. 231
0g0Qg0 fx 1xQxbx22,
8.20
a. We rewrite f as where
2
min
The eigenvalues of Q are 1 and 5. Therefore, the range of values of the step size for which the algorithm converges to the minimizer is 025.
51
b. An eigenvector of Q corresponding to the eigenvalue 5 is v1, 15. We have xQ1b11, 95. Hence, an initial condition that results in the algorithm diverging is
x0xv2 . 2
8.21
In both cases, we compute the Hessian Q of f, and find its largest eigenvalue max. Then the range we seek is 02max.
a. In this case,
with eigenvalues 2 and 10. Hence, the answer is 015.
Q6 4, 46
b. In this case, again we have
with eigenvalues 2 and 10. Hence, the answer is 015.
8.22
For the given algorithm we have
k 2 gkgk2
gkQgkgkQ1gk If 02, then 20, and by Lemma 8.2,
k 2minQ0 max Q
which implies that P1k0 k1. Hence, by Theorem 8.1, xk ! x for any x0. If 0 or 2, then 20, and by Lemma 8.2,
k 2maxQ0. min Q
By Lemma 8.1, VxkVx0. Hence, if x0 6 x, then Vxk does not converge to 0, and consequently xk does not converge to x.
8.23
By Lemma 8.1, V xk11kV xk for all k. Note that the algorithm has a descent property if and only if V xk1V xk whenever gk 6 0. Clearly, whenever gk 6 0, V xk1V xk if and only if 1k1. The desired result follows immediately.
Q6 4, 46
8.24
We have and hence
xk1xkkdk
hxk1 xk,rfxk1ikhdk,rfxk1i.
Now, let kfxkdk. Since k minimizes k, then by the FONC, 0kk0. By the chain rule, 0kdkrfxkdk. Hence,
00kkdkrfxk kdkhdk,rfxk1i, 52
and so
hxk1 xk,rfxk1i0.
A simple MATLAB routine for implementing the steepest descent method is as follows.
function x,Nsteepdescgrad,xnew,options;
STEEPDESCgrad,x0;
STEEPDESCgrad,x0,OPTIONS;
xSTEEPDESCgrad,x0;
xSTEEPDESCgrad,x0,OPTIONS;
x,NSTEEPDESCgrad,x0;
x,NSTEEPDESCgrad,x0,OPTIONS;
The first variant finds the minimizer of a function whose gradient
is described in grad usually an Mfile: grad.m, using a gradient
descent algorithm with initial point x0. The line search used in the
secant method.
The second variant allows a vector of optional parameters to
defined. OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results, default is no display: 0.
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required of the gradient.
OPTIONS14 is the maximum number of iterations.
For more information type HELP FOPTIONS.
The next two variants returns the value of the final point.
The last two variants returns a vector of the final point and the
number of iterations.
if nargin3
options;
if nargin2
dispWrong number of arguments.;
return; end
end
if lengthoptions14
if options140
options141000lengthxnew;
end
else
options141000lengthxnew;
end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
epsilongoptions3;
maxiteroptions14;
8.25
for k1:maxiter,
53
xcurrxnew;
gcurrfevalgrad,xcurr;
if normgcurrepsilong
dispTerminating: Norm of gradient less than;
dispepsilong;
kk1;
break;
end if
alphalinesearchsecantgrad,xcurr,gcurr;
xnewxcurralphagcurr;
if print,
dispIteration number k
dispk;print iteration index k
dispalpha ;
dispalpha;print alpha
dispGradient;
dispgcurr; print gradient
dispNew point ;
dispxnew; print new point
end if
if normxnewxcurrepsilonxnormxcurr
dispTerminating: Norm of difference between iterates less than;
dispepsilonx;
break;
end if
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxnew;
dispNumber of iterations ;
dispk;
end if
To apply the above MATLAB routine to the function in Example 8.1, we need the following Mfile to specify the gradient.
function ygx
y4x14.3; 223; 1635.3;
We applied the algorithm as follows:
options2106;
options3106;
54
steepdescg,4;5;1,options
Terminating: Norm of gradient less than
1.0000e06
Final point
4.0022e00 3.0000e004.9962e00
Number of iterations
25
As we can see above, we obtained the final point 4.002, 3.000, 4.996 after 25 iterations. The value of the objective function at the final point is 7.21010.
8.26
The algorithm terminated after 9127 iterations. The final point was 0.99992,0.99983. 9. Newtons Method
9.1
a. We have f0x4xx03 and f00x12xx02. Hence, Newtons method is represented as xk1xkxkx0 ,
which upon rewriting becomes
3
xk1x02 xkx0 3
b. From part a, ykxkx023xk1x023yk1.
c. From part b, we see that yk23ky0 and therefore yk ! 0. Hence xk ! x0 for any x0.
d. From part b, we have
lim xk1x0 lim 220 k!1 xk x0 k!1 3 3
and hence the order of convergence is 1.
e. The theorem assumes that f00x 6 0. However, in this problem, xx0, and f00x0.
9.2
a. We have
xk1xxkxkf0xk. By Taylors theorem applied to f0,
f0xkf0xf00xxkxoxkx. Since f0x0 by the FONC, we get
xkxkf0xk1kf00xxkxkoxkxoxkxkoxkx
1koxkx.
Because k converges, it is bounded, and so 1koxkxoxkx. Combining the above
with the first equation, we get
which implies that the order of convergence is superlinear.
xk1xoxkx,
b. In the secant algorithm, if xk ! x, then f0xkf0xk1xkxk1 ! f00x. Since the
secant algorithm has the form xk1xk kf0xk with kxk xk1f0xkf0xk1, we 55
deduce that k ! 1f00x. Hence, if we apply the secant algorithm to a function f 2 C2, and it converges to a local minimizer x such that f00x 6 0, then the order of convergence is superlinear.
9.3
a. We compute f 0 x413 3 and f 00 x423 9. Therefore Newtons algorithm for this problem
takes the form
4xk133
xk1xk4xk2392xk.
b. From part a, we have xk2kx0. Therefore, as long as x0 6 0, the sequence xk does not converge to 0.
9.4
a. Clearly fx0 for all x. We have
fx0 , x2 x21 0 and 11 0
, x1,1.
Hence, fxf1,1 for all x 6 1,1, and therefore 1,1 is the unique global minimizer.
b. We compute
rfx40031400x1x2212
2002 x21 12002140022
4001
Fx11200 4001 .
8000021×2400 4001 12002140022
Applying two iterations of Newtons method, we have x11,0, x21,1. Therefore, in this particular case, the method converges in two steps! We emphasize, however, that this fortuitous situation is by no means typical, and is highly dependent on the initial condition.
c. Applying the gradient algorithm xk1xkkrfxk with a fixed step size of k0.05, we obtain x10.1, 0, x20.17, 0.1.
9.5
If x0x, we are done. So, assume x0 6 x. Since the standard Newtons method reaches the point x in one step, we have
fxfx0Q1g0minfx
0
F x
To apply Newtons method we use the inverse of the Hessian, which is
4001 . 200
fx0Q1g0
0 argminfx0 Q1g01.
for any 0. Hence,
Hence, in this case, the modified Newtons algorithm is equivalent to the standard Newtons algorithm, and
thus x1x.
10. Conjugate Direction Methods
56
10.1
We proceed by induction to show that for k0,,n1, the set d0,,dk is Qconjugate. We assume that di 6 0, i1, . . . , k, so that diQdi 6 0 and the algorithm is well defined.
For k0, the statement trivially holds. So, assume that the statement is true for kn1, i.e., d0, . . . , dk is Qconjugate. We now show that d0, . . . , dk1 is Qconjugate. For this, we need only to show that for each j0,,k, we have dk1Qdj0. To this end,
k1 jk1 Xk pk1 Qdi i ! jd Qdp i0 diQdi d Qd
k1 jXk pk1 Qdi i j p Qd i0 diQdi d Qd .
In the above, we have assumed that the vectors dk are nonzero so that dkQdk 6 0 and the algorithm is well defined. To prove that this assumption holds, we use induction to show that dk is a nonzero linear combination of p0,,pk which immediately implies that dk is nonzero because of the linear independence of p0, . . . , pk.
P For k0, we have d0p0 by definition. Assume that the result holds for kn1; i.e., dk
By the induction hypothesis, diQdj0 for i 6 j. Therefore
k1 j k1 j pk1Qdj j j
d Qd p QddjQdj d Qd 0.
k kpj, where the coecients k are not all zero. Consider dk1: j0 j j
Xk i0
Xk i0
j0 ij
So, clearly dk1 is a nonzero linear combination of p0, . . . , pk1.
10.2
Let k 2 0, . . . , n1 and kfxkdk. By the chain rule, we have 0krfxkkdkdkgk1dk.
dk1
pk1
idi
Xi j0
pk1 pk1
i Xk Xk
kpj j
ikpj. j
Since gk1dk0, we have 0k0. Note that
k1 dkQdk 2gkdkconstant.
2
Asis a quadratic function ofwith positive coecient in the quadratic term, we conclude that k
argmin fxk dk.
Note that since gkdk 6 0 is the coecient of the linear term in k, we have k 6 0. For i 2
0,,k1, we have
1 xk1xkQdi k
1 gk1gkdi k
1 gk1digkdi k
0
57
dkQdi
by assumption, which completes the proof.
10.3
From the conjugate gradient algorithm we have
k k gkQdk1 k1 dgdk1Qdk1 d .
Premultiplying the above by dkQ and using the fact that dk and dk1 are Qconjugate, yields
10.4
k k k k gkQdk1 k
d Qdd Qgdk1Qdk1 d Qd
dkQgk.
k1
a. Since Q is symmetric, then there exists a set of vectors d1, . . . , dn such that Qdiidi, i1,,n, and didj0, j 6 i, where the i are real eigenvalues of Q. Therefore, if i 6 j, we have diQdjdijdjjdidj0. Hence the set d1,,dn is Qconjugate.
b. Define idiQdididi. Let
26 d1 37
D64 . 75. dn
Since Q is positive definite and the set d1, . . . , dn is Qconjugate, then by Lemma 10.1, the set is also linearly independent. Hence, D is nonsingular. By Qconjugacy, we have that for all i 6 j, diQdj0. By assumption, we have dijdjjdidj0. Hence, diQdjjdidj. Moreover, for each i1,,n, we have diQdidiidiididi. We can write the above conditions in matrix form:
Since D is nonsingular, then we have which completes the proof.
10.5
DQdiDidi. Qdiidi,
We have
Hence, in order to have dkQdk10, we need
dkQdk1kdkQgk1dkQdk. dkQdk
10.6
We use induction. For k0, we have
kdkQgk1 . d0a0g0a0b 2 V1.
Moreover, x00 2 V0. Hence, the proposition is true at k0. Assume it is true at k. To show that it is
also true at k1, note first that
xk1xkkdk. 58
Because xk 2 VkVk1 and dk 2 Vk1 by the induction hypothesis, we deduce that xk1 2 Vk1. Moreover,
dk1akgk1bkdk
akQxk1bbkdk.
But because xk1 2 Vk1, Qxk1b 2 Vk2. Moreover, dk 2 Vk1Vk2. Hence, dk1 2 Vk2. This completes the induction proof.
b. The conjugate gradient algorithm is an instance of the algorithm given in the question. By the expanding subspace theorem, we can say that in the conjugate gradient algorithm with x00, at each k, xk is the global minimizer of f on the Krylov subspace Vk. Note that for all kn, Vk1Vk, because of the CayleyHamilton theorem, which allows us to express Qn as a linear combination of I , Q, . . . , Qn1 .
10.7
Expanding a yields
a10DaQx0Dax0Dab
21a DQD aa DQx0 Db10 Qx0 x0 b .
22
Clearlyis a quadratic function on Rr. It remains to show that the matrix in the quadratic term, DQD,
is positive definite. Since Q0, for any a 2 Rr, we have
a DQD aDaQDa0
and
a D QD aDa QDa0
ifandonlyifDa0. SincerankDr,Da0ifandonlyifa0. Hence,thematrixDQDispositive definite.
10.8
a. Let0kn1and0ik. Then,
gk1T gigk1T i1di1di
i1gk1T di1gk1T di 0
by Lemma 10.2.
b. Let0kn1. and0ik1. Then,
gk1T Qgi
kdkdk1T Qi1di1di
ki1dkT Qdi1kdkT Qdii1dk1T Qdi1dk1T Qdi 0
by Qconjugacy of dk1, dk, di and di1 note that the iteration indices here are all distinct.
10.9
We represent f as
fx1x5 3xx07. 232 1
59
The conjugate gradient algorithm is based on the following formulas:
xk1 k1
xkkdk, k gkdk dkQdk
k gk1Qdk kd , k dkQdk .
d
g
d0 g0 Qx0 bb 0 .
k1
h0 1i0
g0d0 1 1
0d0Qd0h0 1i 5 3 0 2. 3 2 1
x1x00d00100. 0 21 12
We have,
We then proceed to compute
1
Hence,
We next proceed by evaluating the gradient of the objective function at x1,
g1 Qx1 b 5 3 0 032.
3 2 12 1 0
Because the gradient is nonzero, we can proceed with the next step where we compute
h32 0i 5 3 0g1Qd0 3 2 1 9
Hence, the direction d1 is
d1g10d03290 32 . 0 4 1 94
It is easy to verify that the directions d0 and d1 are Qconjugate. Indeed, d0Qd1h0 1i5 3 320.
0d0Qd0h0 1i 5 3 04. 3 2 1
10.10
a. We have f x1 x Qxb x where 2
3 2 94 Q5 2 , b3 .
211 60
b. Since f is a quadratic function on R2, we need to perform only two iterations. For the first iteration we compute
For the second iteration we compute
d0g0 3,1
5
29
0.51724, 0.17241 0.06897, 0.20690.
0x1g1
0 d1
1 x2
0.0047534
0.08324, 0.202145.7952
1.000, 1.000.
c. The minimizer is given by xQ1 b1, 1 , which agrees with part b. 10.11
A MATLAB routine for the conjugate gradient algorithm with options for dierent formulas of k is:
function x,Nconjgradgrad,xnew,options;
CONJGRADgrad,x0;
CONJGRADgrad,x0,OPTIONS;
xCONJGRADgrad,x0;
xCONJGRADgrad,x0,OPTIONS;
x,NCONJGRADgrad,x0;
x,NCONJGRADgrad,x0,OPTIONS;
The first variant finds the minimizer of a function whose gradient
is described in grad usually an Mfile: grad.m, using initial point
x0.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results, default is no display: 0.
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required of the gradient.
OPTIONS5 specifies the formula for beta:
0Powell;
1FletcherReeves;
2PolakRibiere;
3HestenesStiefel.
OPTIONS14 is the maximum number of iterations.
For more information type HELP FOPTIONS.
The next two variants return the value of the final point.
The last two variants return a vector of the final point and the
number of iterations.
if nargin3
options;
if nargin2
dispWrong number of arguments.;
return; end
61
end
numvarslengthxnew;
if lengthoptions14
if options140
options141000numvars;
end else
options141000numvars;
end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
epsilongoptions3;
maxiteroptions14;
gcurrfevalgrad,xnew;
if normgcurrepsilong
dispTerminating: Norm of initial gradient less than;
dispepsilong;
return;
end if
dgcurr;
resetcnt0;
for k1:maxiter,
xcurrxnew;
alphalinesearchsecantgrad,xcurr,d;
alphadgcurrdQd;
xnewxcurralphad;
if print,
dispIteration number k
dispk;print iteration index k
dispalpha ;
dispalpha;print alpha
dispGradient;
dispgcurr; print gradient
dispNew point ;
dispxnew; print new point
end if
if normxnewxcurrepsilonxnormxcurr
dispTerminating: Norm of difference between iterates less than;
dispepsilonx;
break;
end if
goldgcurr;
gcurrfevalgrad,xnew;
if normgcurrepsilong
dispTerminating: Norm of gradient less than;
62
dispepsilong;
break;
end if
resetcntresetcnt1;
if resetcnt3numvars
dgcurr;
resetcnt0;
else
if options50 Powell
betamax0,gcurrgcurrgoldgoldgold;
elseif options51 FletcherReeves
betagcurrgcurrgoldgold;
elseif options52 PolakRibiere
betagcurrgcurrgoldgoldgold;
else HestenesStiefel
betagcurrgcurrgolddgcurrgold;
end if
dgcurrbetad;
end
if print,
dispNew beta ;
dispbeta;
dispNew d ;
dispd;
end
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxnew;
dispNumber of iterations ;
dispk;
end if
We created the following Mfile, g.m, for the gradient of Rosenbrocks function: function ygx
y400x2x1.2x121x1, 200x2x1.2;
We tested the above routine as follows:
options2107;
options3107;
options14100;
options50;
conjgradg,2;2,options;
Terminating: Norm of difference between iterates less than
63
1.0000e07
FinalPoint
1.0000e00 1.0000e00
Numberofiteration
8
options51;
conjgradg,2;2,options;
Terminating: Norm of difference between iterates less than
1.0000e07
FinalPoint
1.0000e00 1.0000e00
Numberofiteration
10
options52;
conjgradg,2;2,options;
Terminating: Norm of difference between iterates less than
1.0000e07
FinalPoint
1.0000e00 1.0000e00
Numberofiteration
8
options53;
conjgradg,2;2,options;
Terminating: Norm of difference between iterates less than
1.0000e07
FinalPoint
1.0000e00 1.0000e00
Numberofiteration
8
The reader is cautioned not to draw any conclusions about the superiority or inferiority of any of the formulas for k based only on the above single numerical experiment.
11. QuasiNewton Methods
11.1
a. Let
Then, using the chain rule, we obtain
Hence
Since 0 is continuous, then, if dkgk0, there exists0 such that for all2 0, , 0,
i.e., fxkdkfxk.
b. By part a, 0 for all2 0,. Hence,
which implies that k0.
fxkdk. 0dkrfxkdk.
00dkgk.
kargmin 6 0 0
64
c. Now,
dkgk1dkrfxkkdk0kk.
Since karg min0 fxkdk0, we have 0kk0. Hence, gk1dk0. d.
i. We have dkgk. Hence, dkgkkgkk2. If gk 6 0, then kgkk20, and hence dkgk0.
ii. We have dkFxk1gk. Since Fxk0, we also have Fxk10. Therefore, dkgkgkF xk1gk0 if gk 6 0.
iii. We have Hence,
dkgkk1dk1. dkgkkgkk2k1dk1gk.
By part c, dk1gk0. Hence, if gk 6 0, then kgkk20, and dkgkkgkk20.
iv. We have dkHkgk. Therefore, if Hk0 and gk 6 0, then dkgkgkHkgk0.
e. Using the equation rf xQxb, we get
dkgk1dkQxk1b
dkQxkkdkb
kdkQdkdkQxkbkdkQdkdkgk.
By part c, dkgk10, which implies
kdkQdk .
11.2
Yes, because:
1. The search direction is of the form dkHkrfxk for matrix HkFxk1;
2. The matrix HkFxk1 is symmetric for f 2 C2;
3. If f is quadratic, then the quasiNewton condition is satisfied: Hk1gixi, 0ik. To see this, note that if the Hessian is Q, then Qxigi. Multiplying both sides by HkQ1, we obtain the desired result.
11.3
a. We have
Using the chain rule, we obtain
d f xkdkxkdk Qdkdkb. d
dkgk
f xkdk1 xkdk Q xkdkxkdk bc. 2
65
Equating the above to zero and solving forgives
xkQb dkdkQdk.
Taking into account that gkxkQb and that dkQdk0 for gk 6 0, we obtain gkdk gkHkgk
kdkQdkdkQdk .
b. The matrix Q is symmetric and positive definite; hence k0 if HkHk0.
11.4
a. The appropriate choice is HF x1. To show this, we can apply the same argument as in the proof of the theorem on the convergence of Newtons method. We wont repeat it here.
b. Yes provided we incorporate the usual step size. Indeed, if we apply the algorithm with the choice of H in part a, then when applied to a quadratic with Hessian Q, the algorithm uses HQ1, which definitely satisfies the quasiNewton condition. In fact, the algorithm then behaves just like Newtons algorithm.
11.5
Our objective is to minimize the quadratic
fx 1xQxxbc. 2
We first compute the gradient rf and evaluate it at x0, rfx0g0 Qx0 b1.
1
It is a nonzero vector, so we proceed with the first iteration. Let H0I2. Then,
The step size 0 is
Hence,
d0 H0g01 . 1
h1 1i1 g0d0 1 2
0d0Qd0h1 1i1 0 1 3. 0 2 1
x1 x0 0d023 . 23
We evaluate the gradient rf and evaluate it at x1 to obtain
rfx1g1 Qx1 b1 0 231 13.
0223 1 13
It is a nonzero vector, so we proceed with the second iteration. We compute H1, where
x0H0g0 x0H0g0 H1H0g0 x0H0g0 .
66
To find H1 we need to compute,
x0 x1 x023and g0 g1 g023 .
Using the above, we determine,
Then, we obtain
23
x0H0g0 x0H0g0
23 x0H0g0 0
43 g0 x0H0g08.
9
and
and
We next compute Therefore,
H1H0g0 x0H0g0 0 0
1 00 49 0 189
10 0 12
d1H1g113 . 16
1 g1d11. d1 Qd1
x2xx11d1 1 .
Note that g2Qx2b0 as expected.
11.6
12
We are guaranteed that the step size satisfies k0 if the search direction is in the descent direction, i.e., the search direction dkMkrfxk has strictly positive inner product with rfxk see Exercise 11.1. Thus, the condition on Mk that guarantees k0 is rfxkMkrfxk0, which corresponds to 1a0, or a1. Note that if a1, the search direction is not in the descent direction, and thus we cannot guarantee that k0.
11.7
Let x 2 Rn. Then xHk1x
!
xHkxx xkHkgkxkHkgk x gkxkHkgk
xxkHkgk2 HkxgkxkHkgk.
The complement of the Rank One update equation is
Bk1BkgkBkxkgkBkxk . xkgkBkxk
x
Note that since Hk0, we have xHkx0. Hence, if gkxkHkgk0, then xHk1x0.
11.8
67
Using the matrix inverse formula, we get
B1 k1
B1 k
B1gkBkxkgkBkxkB1 kk
xkgkBkxkgkBkxkB1gkBkxk k
xkB1gkxkB1gk B1k k.
k gkxkB1gk k
Substituting Hk for B1, we get a formula identical to the Rank One update equation. This should not k
be surprising, since there is only one update equation involving a rank one correction that satisfies the quasiNewton condition.
11.9
We first compute the gradient rf and evaluate it at x0, rfx0g0 Qx0 b1.
1
It is a nonzero vector, so we proceed with the first iteration. Let H0I2. Then,
The step size 0 is
d0 H0g01 . 1
h1 1i1 g0d0 1 2
0d0Qd0h1 1i1 0 1 3. 0 2 1
Hence,
We evaluate the gradient rf and evaluate it at x1 to obtain
To find H1 we need
Using the above, we determine,
23 x0H0g0 0
43
g0 x0H0g08. 9
x1 x0 0d023 . 23
rfx1g1 Qx1 b1 0 231 13. 0223 1 13
It is a nonzero vector, so we proceed with the second iteration. We compute H1, where x0H0g0 x0H0g0
H1H0g0 x0H0g0 .
x0 x1 x023and g0 g1 g023 .
23
and
68
Then, we obtain
0 01 00 49
0 189 10
16
The calculations are similar until we get to the second step:
H112 12
12 12 d00.
So the algorithm gets stuck at this point, which illustrates that it doesnt work.
11.11
a. Since f is quadratic, and karg min0 fxkdk, then gkdk
kdkQdk .
b. Now, dkHkgk, where HkHk0. Substituting this into the formula for k in part a, yields
gkHkgk
kdkQdk0.
11.12
Our solution to this problem is based on a solution that was furnished to us by Michael Mera, a student in ECE 580 at Purdue in Spring 2005. To proceed, we recall the formula of Lemma 11.1,
Auv1A1A1uvA1 1vA1u
for 1vA1u 6 0. Recall the definitions from the hint, gk
x0H0g0 x0H0g0 H1H0g0 x0H0g0
and
Note that d0Qd10, that is, d0 and d1 are Qconjugate.
11.10
0 12
d1H1g113 .
and and
A0Bk, u0gkxk , v0gk, gkgkBkxk
A1BkgkxkA0u0v0 , u1xkBkxk ,
v1xkBk. 69
Using the above notation, we represent Bk1 as
Bk1A0 u0v0 u1v1
Applying to the above Lemma 11.1 gives
HBFGSA10000 k10 1
000 00 0 . 11
A1u1v1. HBFGSB 1
k1 k1
A1u1v11
A1u vA1A1 1 1 1 1 .
1 1vA1u 111
Substituting into the above the expression for A1 yields 1
1 A1u0v0 A11 A1u0v0 A11 1 A0 0 uvA0 0 AuvA 0 1vA1u 11 0 1vA1u
1v0A0 u0
Note that AB . Hence, A1B1H . Using this and the notation introduced at the beginning of
0k0kk the solution, we obtain
HBF GSHHkgkgkHk
k1 H
k gkxkgkHkgk
HkgkgkHk BkxkxkBk
1v A1A0 u0v0A0 u 1 0 1vA1u0 1
00
k
1xkB HHkgkgkHkBkxk
gkxkgkHkgk xkBkxk
k k gkxkgkHkgk xkBkxk
HkHkgkgkHk . gkxkgkHkgk
We next perform some multiplications taking into account that HkB1 and hence k
We obtain
HkBkBkHkIn. HkgkgkHk
k
1 Hkgkgk xkxk1 gkgkHk
xkB xkxk Bgkgkxk . k k gkxkgkHkgk
HBF GSH
k1
gkxkgkHkgk
gkxkgkHkgk gkxkgkHkgk
We proceed with our manipulations. We first perform multiplications by xk and xk to obtain
HBF GSH k1
HkgkgkHk gkxkgkHkgk
gkxkgkHkgk gkxkgkHkgk xkB xkxkB xkxkgkgkxk
k
Hkgkgkxk xkxkxkgkgkHk
.
k k gkxkgkHkgk
70
Cancelling the terms in the denominator of the last term above and performing further multiplications gives
HBFGS H k1
k
HkgkgkHk gkxkgkHkgk
HkgkgkxkxkgkgkHk gkxkgkHkgk xkgk gkxk
xkxk gkxkgkHkgk xkgk gkxk
Hkgk gkxk xkxkgkHk xkgk gkxk .
Further simplification of the third and the fifth terms on the right handside of the above equation gives
HBFGS H k1
k
HkgkgkHk gkxkgkHkgk
HkgkgkHk gkxkgkHkgk
xkxk gkxkgkHkgk xkgk gkxk
HkgkxkxkgkHk . xkgk
Note that the second and the third terms cancel out each other. We then represent the fourth term in alternative manner to obtain
HBF GSH k1 k
xkxk 1gkHkgkxkgk gkxk
HkgkxkxkgkHk , xkgk
which is the desired BFGS update formula.
11.13
The first step for both algorithms is clearly the same, since in either case we have x1x00g0.
For the second step,
In 1 !
1
d H1g
1
!
g0g0 g0x0
x0x0 x0g0
g0x0g0x0 1 !g
g0x0
g11g0g0 x0x0g1
Since the line search is exact, we have
g0x0 x0g0 g0x0g1x0g0g1
g0x0 .
x0g10d0g10. 71
Hence,
where
is the HestenesStiefel update formula for 0. Since d0g0, and g1g00, we have
g1g1g0 0g0g0 ,
which is the PolakRibiere formula. Applying g1g00 again, we get g1g1
0g0g0 ,
a. Suppose the three conditions hold whenever applied to a quadratic. We need to show that when applied
toaquadratic,fork0,,n1andi0,,k,Hk1gi xi. Forik,wehave
Hk1gkHkgkUkgk by condition 1
HkgkxkHkgk by condition 2
xk,
asrequired. Fortherestoftheproofi0,,k1,weuseinductiononk.
For k0, there is nothing to prove covered by the ik case. So suppose the result holds for k1.
Toshowtheresultfork,firstfixi20,,k1. Wehave
Hk1giHkgiUkgi
xiUkgi by the induction hypothesis
xiakxkgibkgkHkgi by condition 3.
So it suces to show that the second and third terms are both 0. For the second term, xkgixkQxi
kidkQdi 0
because of the induction hypothesis, which implies Qconjugacy where Q is the Hessian of the given quadratic. Similarly, for the third term,
gkHkgigkxi by the induction hypothesisxkQxi
kidkQdi0,
1 1 g0g1 ! 0 dg! x
g0x0
1 g1g0 0
gg0d0 dg10d0
0g1g0g1g1g0 d0g0 d0g1g0
which is the FletcherReeves formula.
11.14
72
again because of the induction hypothesis, which implies Qconjugacy. This completes the proof.
b. All three algorithms satisfy the conditions in part a. Condition 1 holds, as described in class. Condition 2 is straightforward to check for all three algorithms. For the rankone and DFP algorithms, this is shown in the book. For BFGS, some simple matrix algebra establishes that it holds. Condition 3 holds by appropriate definition of the vectors ak and bk. In particular, for the rankone algorithm,
akxkHkgk , xkHkgkgk
For the DFP algorithm,
k xk
axkgk ,
Finally, for the BFGS algorithm,
k gkHkgk ! xk Hkgk k xk
bk
xkHkgk . xkHkgkgk
k b
Hkgk
gkHkgk .
a1gkxk xkgkgkxk , bgkxk .
11.15
a. Suppose we apply the algorithm to a quadratic. Then, by the quasiNewton property of DFP, we have
HDFPgi xi,0ik. ThesameholdsforBFGS.Thus,forthegivenH ,wehavefor0ik, k1 k
H
giHDFP gi1HBFGSgi k1 k1 k1
xi1xixi ,
which shows that the above algorithm is a quasiNewton algorithm and hence also a conjugate direction algorithm.
b. By Theorem 11.4 and the discussion on BFGS, we have HDFP0 and HBFGS0. Hence, for any
x 6 0,
sinceand 1 are nonnegative. Hence, Hk0, from which we conclude that the algorithm has the
descent property if k is computed by line search by Proposition 11.1.
11.16
To show the result, we will prove the following precise statement: In the quadratic case with Hessian Q, suppose that Hk1giixi, 0ik, kn1. If i 6 0, 0ik, then d0,,dk1 are Qconjugate.
We proceed by induction. We begin with the k0 case: that d0 and d1 are Qconjugate. Because 0 6 0, we can write d0x00. Hence,
d1Qd0g1H1Qd0
0
g1 0x0
00g1d0.
But g1d00 as a consequence of 00 being the minimizer of fx0d0. Hence, d1Qd00.
xH xxHDFPx1xHBFGSx0 kkk
kk
g1H1
g1 H1g0
Qx0 0
73
Assume that the result is true for k1 where kn1. We now prove the result for k, that is, that d0,,dk1 are Qconjugate. It suces to show that dk1Qdi0, 0ik. Given i, 0ik, using the same algebraic steps as in the k0 case, and using the assumption that i 6 0, we obtain
dk1Qdigk1Hk1Qdi .
igk1di.
Because d0, . . . , dk are Qconjugate by assumption, we conclude from the expanding subspace lemma
Lemma 10.2 that gk1di0. Hence, dk1Qdi0, which completes the proof. 11.17
A MATLAB routine for the quasiNewton algorithm with options for dierent formulas of Hk is:
function x,Nquasinewtongrad,xnew,H,options;
QUASINEWTONgrad,x0,H0;
QUASINEWTONgrad,x0,H0,OPTIONS;
xQUASINEWTONgrad,x0,H0;
xQUASINEWTONgrad,x0,H0,OPTIONS;
x,NQUASINEWTONgrad,x0,H0;
x,NQUASINEWTONgrad,x0,H0,OPTIONS;
The first variant finds the minimizer of a function whose gradient
is described in grad usually an Mfile: grad.m, using initial point
x0 and initial inverse Hessian approximation H0.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results, default is no display: 0.
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required of the gradient.
OPTIONS5 specifies the formula for the inverse Hessian update:
0Rank One;
1DFP;
2BFGS;
OPTIONS14 is the maximum number of iterations.
For more information type HELP FOPTIONS.
The next two variants return the value of the final point.
The last two variants return a vector of the final point and the
number of iterations.
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return; end
end
numvarslengthxnew;
if lengthoptions14
if options140
options141000numvars;
end else
74
options141000numvars;
end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
epsilongoptions3;
maxiteroptions14;
resetcnt0;
gcurrfevalgrad,xnew;
if normgcurrepsilong
dispTerminating: Norm of initial gradient less than;
dispepsilong;
return;
end if
dHgcurr;
for k1:maxiter,
xcurrxnew;
alphalinesearchsecantgrad,xcurr,d;
xnewxcurralphad;
if print,
dispIteration number k
dispk;print iteration index k
dispalpha ;
dispalpha;print alpha
dispGradient;
dispgcurr; print gradient
dispNew point ;
dispxnew; print new point
end if
if normxnewxcurrepsilonxnormxcurr
dispTerminating: Norm of difference between iterates less than;
dispepsilonx;
break;
end if
goldgcurr;
gcurrfevalgrad,xnew;
if normgcurrepsilong
dispTerminating: Norm of gradient less than;
dispepsilong;
break;
end if
palphad;
qgcurrgold;
resetcntresetcnt1;
if resetcnt3numvars
75
dgcurr;
resetcnt0;
else
if options50 Rank One
qpHq
HHpHqpHqqpHq;
elseif options51 DFP
HHpppqHqHqqHq;
else BFGS
HH1qHqqppppqHqpHqpqp;
end if
dHgcurr;
end
if print,
dispNew H ;
dispH;
dispNew d ;
dispd;
end
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxnew;
dispNumber of iterations ;
dispk;
end if
We created the following Mfile, g.m, for the gradient of Rosenbrocks function: function ygx
y400x2x1.2x121x1, 200x2x1.2;
We tested the above routine as follows:
options2107;
options3107;
options14100;
x02;2;
H0eye2;
options50;
quasinewtong,x0,H0,options;
Terminating: Norm of difference between iterates less than
1.0000e07
Final point
1.0000e00 1.0000e00
Number of iterations
8
76
options51;
quasinewtong,x0,H0,options;
Terminating: Norm of difference between iterates less than
1.0000e07
Final point
1.0000e00 1.0000e00
Number of iterations
8
options52;
quasinewtong,x0,H0,options;
Terminating: Norm of difference between iterates less than
1.0000e07
Final point
1.0000e00 1.0000e00
Number of iterations
8
The reader is again cautioned not to draw any conclusions about the superiority or inferiority of any of the formulas for Hk based only on the above single numerical experiment.
11.18
a. The plot of the level sets of f were obtained using the following MATLAB commands:
X,Ymeshdom2:0.1:2, 1:0.1:3;
ZX.44Y.22 X.Y XY;
V0.72, 0.6, 0.2, 0.5, 2;
contourX,Y,Z,V
The plot is depicted below:
3 2.5 2 1.5 1 0.5 0 0.5
1
2 1.5 1 0.5 0 0.5 1 1.5 2
x1
b. With the initial condition 0,0, the algorithm converges to 1,0, while with the initial condition 1.5, 1, the algorithm converges to 1, 2. These two points are the two strict local minimizers of f as can be checked using the SOSC. The algorithm apparently converges to the minimizer closer to the initial point.
12. Solving Axb
77
x2
12.1
Write the least squares cost in the usual notation kAxbk2 where
26 3 37 h i 26 1 37
A4565, x m , b4235. The least squares estimate of the mass is
mAA1Ab31. 70
12.2
Write the least squares cost in the usual notation kAxbk2 where
26 1 1 37a26 3 37 A41 25, x b , b445.
145 The least squares estimate for a, b is
ab
AA1Ab3 7112
721 31 121 7112
1473 31 1 35
14 952.
914
2 12 23 A6422275 ,
12.3
a. We form
2 5.003 b6419.575 .
44.0 gAA1Ab9.776.
b. We start with P 00.040816, and x09.776. We have a14228, and b178.5. Using the RLS formula, we get x19.802, which is our updated estimate of g.
12.4
Let xx1, x2, . . . , xn and yy1, y2, . . . , yn. This leastsquares estimation problem can be expressed
32 2 The least squares estimate of g is then given by
as
withas the decision variable. Assuming that x 6 0, the solutPion is unique and is given by
minimize kxyk2,
xy n xiyi1 i1
x x x yP . xx ni1 x2i
78
12.5
The least squares estimate of R is the least squares solution to 1RV1
Therefore, the least squares solution is
12.6
.
1RVn.
0 21311 2V13 V V R B1,,164.75CA 1,,164 . 75 1 n.
1Vn n
We represent the data in the table and the decision variables a and b using the usual least squares matrix
notation:
26 1 2 37 26 6 37aA41 15, b445, x b .
explicitly compute:
12.8
Ratio0.540.32 0.320.34
0.2211. 0.02
325
xaAA1Ab11 91 2519 92512.
The least squares estimate is given by
b 9 9 26 18 9 11 26 6118
12.7
The problem can be formulated as a leastsquares problem with
26 0 . 3 0 . 1 37 26 5 37 A40.4 0.25, b435,
0.3 0.7 4
where the decision variable is xx1,x2, and x1 and x2 are the amounts of A and B, respectively. After
some algebra, we obtain the solution:
xAA1Ab10.54 0.32 3.9 .
0.340.540.322 0.32 0.34 3.9
Since we are only interest in the ratio of the first component of x to the second component, we need only
For each k, we can write
ykayk1bukvk
a2yk2abuk1avk1bukvk .
ak1bu1 ak2bu2 buk ak1v1 ak2v2 vk 79
Writeuu1,,un,vv1,,vn,andyy1,,yn. Then,yCuDv,where 26 b 0037 26 1 0037
C6 ab b .7, D6 a 1 .7. 64 . . . . . . . . . 0 75 64 . . . . . . 0 75
an1bab b an1 an21
Write bD1y and AD1C so that bAuv. Therefore, the linear leastsquares estimate of u
given y is
But CbD. Hence,
u1D1y1 6a 1 .7y. b b 64 . . . . . . . . . 0 75
0a 1
Notice that D1 has the simple form shown above.
An alternative solution is first to define zz1,,zn by zkyk ayk1. Then, we have zbuv.
Therefore, the linear leastsquares estimate of u given y or, equivalently, z is 261 0037
uAA1AbCDD1C1CDD1y. 261 0037
12.9
Define
u1z1 6a 1 .7y. b b 64 . . . . . . . . . 0 75
0a 1
26 x 1 1 37 26 y 1 37 X64 . . . . . . 75 , y64 . . . 75 .
xp1 yp
Since the xi are not all equal, we have rank X2. The objective function can be written as
2 fa,bX ab y .
Therefore, by Theorem 12.1 there exists a unique minimizer a,b given by
ab
4X2X2 5.
X2Y XXYX2X2
XX1Xy
Ppi1 x2i Ppi1 xi1 Ppi1 xiyi
Ppi1 xi p Ppi1 yi X2 X1XY
X1Y
1 1 XXY
X2X2 X X2 Y 2 XY XY3
80
As we can see, the solution does not depend on Y 2. 12.10
a. Wewishtofind!andsuchthat
sin!t1
sin!tp Taking arcsin, we get the following system of linear equations:
!t1arcsin y1 .
!tparcsinyp.
b. We may write the system of linear equations in part a as Axb, where
26t1 137 A64 . . . . . . 75 ,
tp 1
x
!
,
26arcsin y137 b64 . . . 75 .
arcsinyp
Since the ti are not all equal, the first column of A is not a scalar multiple of the second column. Therefore, rank A2. Hence, the least squares solution is
x
AA1Ab
Ppi1 t2i Ppi1 ti1 Ppi1 ti arcsin yi Ppi1 ti p Ppi1 arcsin yi
T2 T1TY T1Y
1 1 TTY
y1 .
yp.
T2 T2
1
T T2 Y
TY TY .
TTY T2Y
The given line can be expressed as the range of the matrix A1, m. Let bx0, y0 be the given point.
T2T2
Therefore, the problem is a linear least squares problem of minimizing kAxbk2. The solution is given by
xAA1Abx0my0 . 1m2
Therefore, the point on the straight line that is closest to the given point x0,y0 is given by x,mx. 12.12
12.11
a. Write
xp 1
The objective function can then be written as kAzbk2.
261 137
A64 . .752Rpn1,
z ac 2Rn1,
26y137 b64 .752Rp.
yp
81
b. Let Xx1,,xp 2 Rpn, and e1,,1 2 Rp. Then we may write AX e. The solution to the problem is AA1Ab. But
AAXX XeXX 0 eX p 0 p
since Xex1xp0 by assumption. Also, AyXy 0
ey ey
since Xyy1x1ypxp0 by assumption. Therefore, the solution is given by
z AA1AbXX1 000 . 0 1p ey 1ey
The ane function of best fit is the constant function fxc, were 1 Xp
12.13
a. Using the least squares formula, we have
cp yi. i1
p
0 2 u1 311 2 y1 3 Pn
un yn
u y
Bu ,,u 6475CA u ,,u 6475 P k k.
k1 n1n 1n nk1u2k
b. Givenuk 1forallk,wehave
1Xn1Xn 1Xn
nn ykn ekn ek. k1 k1 k1
Hence,n !ifandonlyiflimn!1 1 Pn ek 0. n k1
12.14
Weposetheproblemasaleastsquaresproblem: minimizekAxbk2 wherexa,b,and
26 x 0 1 37 26 x 1 37 A41 15, b425.
We have
Therefore, the least squares solution is
x21 x3
P2i0 x2i P2i0 xiP2i0 xixi1
AAP2x 3 , AbP2x . i0 i i0 i1
a P2i0 x2i
P2i0 xi1 P2i0 xixi1 5 31 18 72 3 P2 x3 3 1116 .
bP2 x i0 i
i0 i1
82
12.15
Weposetheproblemasaleastsquaresproblem: minimizekAxbk2 wherexa,b,and
note that h00. We have AA
squares solution is
a Pn1 h2 01 Pn1 h hPn1 h h Pn1 h2
260 137 A6 h1 07,
26h137 b6h27
where we use s00. We have
i1 i i1 ii1 i i1
64 . . . . . . 75
64 . . . 75 hn
0
Pn1 h2 0 Pn1 h h
hn1 i1 i
Ab i1 i i1 . 01 h1
,
The matrix AA is nonsingular because we assume that at least one hk is nonzero. Therefore, the least
i1 i i1 i i1i1 i i1 i1 i .
b 0 1 h1 Weposetheproblemasaleastsquaresproblem: minimizekAxbk2 wherexa,b,and
12.16
Pn1 s2
AAPn1s n,AbPns.
Pn1 s
i1 i i1 i
The matrix AA is nonsingular because we assume that at least one sk is nonzero. Therefore, the least squares solution is
12.17
This leastsquares estimation problem can be expressed as minimize kaxyk2.
If x0, then the problem has an infinite number of solutions: any a solves the problem. Assuming that x 6 0, the solution is unique and is given by
axx1xyxy . xx
a Pn1 s2 i1 i
Pn1 s 1 Pn1 s si1 i i1 i i1
bPn1si i1
n Pn si
i1
n Pn1 sisi1Pn1 si Pn
PPPP.
1
si P Pn1 i1 n1 i1 n1 i1 n
h1
260 137 26s137 A6 s1 17, b6s27
64 . . . . . . 75
sn1 1 sn
64 . . . 75
Pn1 s s
n n1s2 n1s 2i1 si i1 sisi1 i1 s2i i1si i1 i i1 i
83
12.18
The solution to this problem is the same as the solution to:
minimize 1 kxbk2 2
subject to x 2 RA.
Substituting xAy, we see that this is simply a linear least squares problem with decision variable y. The solution to the least squares problem is yAA1Ab, which implies that the solution to the given problem is xAAA1Ab.
12.19
We solve the problem using two dierent methods. The first method would be to use the Lagrange multiplier technique to solve the equivalent problem,
minimize kxx0k2
subjectto hxh1 1 1ix10,
The lagrangian for the above problem has the form,
lx,x21 x2 32 x23 x1 x2 x3 1.
Applying the FONC gives
26 2 x 1 37 rxl42x2 65 and 23
Solving the above yields
x 643 5 75 .
x1 x2 x3 10. 243
3 4
3
The second approach is to use the wellknown solution to the minimum norm problem. We first derive a general solution formula for the problem,
minimize kxx0k subject to Axb,
where A 2 Rmn, mn, and rankAm. To proceed, we first transform the above problem from the x coordinates into the zxx0 coordinates to obtain,
minimize kzk
subject to AzbAx0.
The solution to the above problem has the form,
zA AA1 bAx0
A AA1 bA AA1 Ax0. Therefore, the solution to the original problem is
A AA1 bAx0x0 84
x
A AA1 bA AA1 Ax0x0
A AA1 bInA AA1 A x0.
We substitute into the above formula the given numerical data to obtain
12.20
213 22 1 13203 6376333767
x41541 2 15435 3333
1 1 1 2 0 233333
The solution is therefore
xBB1BcpAA1Ab1bpp
Alternatively: Write
4
643575. 3
4 3
For each x 2 Rn, let yxx0. Then, the original problem is equivalent to minimize kyk
subject to AybAx0,
in the sense that y is a solution to the above problem if and only if xy x0 is a solution to the original
problem. By Theorem 12.2, the above problem has a unique solution given by
yAAA1bAx0AAA1bAAA1Ax0.
Therefore, the solution to the original problem is
xAAA1bAAA1Ax0x0AAA1bInAAA1Ax0.
Note that
kxx0ky
kAAA1bAx0k
kAAA1bAAA1Ax0k. The objective function of the given problem can be written as
12.21
where
fxkBxck2,
26 A 37 26 b 1 37 B64 . . . 75 , c64 . . . 75 .
A bp
kAxbik2xAAx2xAbikbik2 Therefore, the given objective function can be written as
pxAAx2xAb1 bpkb1k2 kbik2. The solution is therefore 1 Xp
x pAA1Ab1 bp p xi i1
1 Xp i1
1 Xp AA1Abip xi
i1
85
Note that the original problem can be written as the least squares problem minimize kAxbk2,
where
12.22
Write
bb1bp . p
kAxbik2xAAx2xAbikbik2 Therefore, the given objective function can be written as
1 pxAAx2xA1b1 pbp1kb1k2 pkbik2. The solution is therefore by inspection
1 Xp Xp x 1 pAA1A1b1 pbp ixi
ixi,
1 p i1 i1 Note that the original problem can be written as the least squares problem
where ii1p. where
12.23
minimize kAxbk2, b 1b1 pbp.
1 p
Let xAAA1b. Suppose y is a point in RA that satisfies Ayb. Then, there exists z 2 Rm such that yAz. Then, subtracting the equation AAAA1bb from the equation
AAzb, we get
AAzAA1b0.
Since rank Am, AA is nonsingular. Therefore, zAA1b0, which implies that
yAzAAA1bx.
Hence, xAAA1b is the only vector in RA that satisfies Axb.
12.24
a. We have Similarly,
b. Now,
Hence,
x0AA 1Ab0G1Ab0. 00000
x1AA 1Ab1G1Ab1. 11111
G0hA a1i A1 1 a1
A 1 A 1a 1 a 1G1a1a1.
G 1G 0a 1 a 1 . 86
c. Using the ShermanMorrison formula,
PG1
d. We have
e. Finally,
11
G0a1a11
G1a aG1 G10 110
0 1a G1a 101
P0P0a1a1P0. 1a 1 P 0 a 1
Ab0G G1Ab0 0000
G0 x0
G1a1a1x0
G1x0a1a1 x0.
G1 A b1 11
x1
G1Ab1abab
x0P1a1b1a1x0. The general RLS algorithm for removals of rows is:
Pk1PkPkak1ak1Pk 1ak1P kak1
11 1111G1 Ab0ab
1011
G1Gx0aax0ab 111111
x0G1a bax0 1111
xk1xkP k1ak1 bk1ak1xk . Using the notation of the proof of Theorem 12.3, we can write
12.25
xk1xkbRk1aRk1xk
aRk1 . kaRk1k2
Hence,
X
k1 2
xk
which means that xk is in spana1, . . . , amRA.
12.26
a. We claim that x minimizes kxx0k subject to x : Axb if and only if yxx0 minimizes kyk subject to AybAx0.
To prove suciency, suppose y minimizes kyk subject to AybAx0. Let xyx0. Consider any point x1 2 x : Axb. Now,
Ax1x0bAx0. 87
kaRk1k2 bRi1aRi1xi
aRi1
i0
Hence, by definition of y,
kx1x0kkykkxx0k.
Therefore x minimizes kxx0k subject to x : Axb.
To prove necessity, suppose x minimizes kxx0k subject to x : Axb. Let yxx0.
Consider any point y1 2 y : AybAx0. Now,
Ay1x0b.
Hence, by definition of x,
ky1kky1x0x0kkxx0kkyk.
Therefore, y minimizes kyk subject to AybAx0.
By Theorem 12.2, there exists a unique vector y minimizing kyk subject to AybAx0. Hence,
by the above claim, there exists a unique x minimizing kxx0k subject to x : Axb b. Using the notation of the proof of Theorem 12.3, Kaczmarzs algorithm is given by
xk1xkbRk1aRk1xkaRk1. Subtract x0 from each side to give
xk1x0xkx0bRk1aRk1x0aRk1xkx0aRk1. Writing ykxkx0, we get
yk1ykbRk1aRk1x0aRk1ykaRk1.
Note that y00. By Theorem 12.3, the sequence yk converges to the unique point y that minimizes kyk subject to AybAx0. Hence xk converges to yx0. From the proof of part a, xyx0 minimizes kxx0k subject to x : Axb. This completes the proof.
12.27
Following the proof of Theorem 12.3, assuming kak1 without loss of generality, we arrive at kxk1xk2kxkxk22axkx2.
Since xk, x 2 RARa by Exercise 12.25, we have xkx 2 RA. Hence, by the Cauchy
Schwarz inequality,
axkx2kak2kxkxk2kxkxk2, since kak1 by assumption. Thus, we obtain
kxk1xk212kxkxk22kxkxk2 wherep12. Itiseasytocheckthat0121forall20,2. Hence,01.
12.28
In Kaczmarzs algorithm with 1, we may write
xk1xkbRk1aRk1xk aRk1 .
kaRk1k2 Subtracting x and premultiplying both sides by aRk1 yields
aRk1 xkxbRk1aRk1xk
aRk1kaRk1k2
aRk1xk1x
aRk1xkaRk1xbRk1aRk1xk
bRk1aRk1x
0.
88
Substituting aRk1xbRk1 yields the desired result.
12.29
We will prove this by contradiction. Suppose Cx is not the minimizer of kBybk2 over Rr. Let y be the minimizer of kBybk2 over Rr. Then, kBybk2kBCxbk2kAxbk2. Since C is of full rank, there exists x 2 Rn such that yCx. Therefore,
kAxbk2 kBCxbk2 kBybk2 kAx bk2 which contradicts the assumption that x is a minimizer of kAxbk2 over Rn.
12.30
a. Let ABC be a full rank factorization of A. Now, we have ACB, where BBB1B and CCCC1. On the other hand ACB. Since ACB is a full rank factorization of A, we have ACBBC. Therefore, to show that AA, it is enough to show that
BB CC.
To this end, note that BBBB1, and CCC1C. On the other hand, BBB1BBBB1, and CCCC1CC1C, which completes the proof.
b. Note that ACB, which is a full rank factorization of A. Therefore, ABC. Hence, to show that AA, it is enough to show that
BB CC.
To this end, note that BBB1BB since B is a full rank matrix. Similarly, CCCC1C since C is a full rank matrix. This completes the proof.
12.31
: We prove properties 14 in turn.
1. This is immediate.
2. Let ABC be a full rank factorization of A. We have ACB, where BBB1B and
CCCC1. Note that BBI and CCI. Now,
3. We have
AAACBBCCBCB
A. AABCCB
BBBB
BB1BB
BBB1BBB
BCCBAA.
89
4. We have
which is a full rank factorization. Therefore,
But
260 0 037 A1A240 12 125 .
000
260 0 037 A2A140 0 15.
000
AACBBCCC
CC
CCCC1CCC1C
CC
CBBC
AA.
: By property 1, we immediately have AAAA. Therefore, it remains to show that there exist matricesU andV suchthatA UA andA AV.
For this, we note from property 2 that AAAA. But from property 3, AAAAAA. Hence, AAAA. Setting UAA, we get that AUA.
Similarly, we note from property 4 that AAAATAA. Substituting this back into property 2 yields AAAAAAA. Setting VAA yields AAV . This completes the proof.
12.32
Taken from 23, p. 24 Let
We compute
We have
26 0 0 0 37 26 1 0 0 37 A140 1 15, A240 1 05.
010 000
26 0 0 0 37 A140 0 15,
26 1 0 0 37 A240 1 05A2.
0 1 1
A1A240 1 054150 1 0
0 0 0 260 0 037 26037h i
010 1
Hence, A1A2 6 A2A1.
13. Unconstrained Optimization and Feedforward Neural Networks
13.1
a. The gradient of f is given by
r fwX dy dX d w. 90
b. The Conjugate Gradient algorithm applied to our training problem is: 1. Set k : 0; select the initial point w0.
2. g0XdydXd w0. If g00, stop, else set d0g0. 3.dkgk
k dkXdXd dk
4. wk1wkkdk
5. gk1XdydXd wk1. If gk10, stop. 6. gk1XdXd dk
k dkXdXd dk
7. dk1gk1kdk
8. Setk:k1;goto3. c. We form the matrix Xd as
Xd 0.5 0.5 0.5 0 0 0 0.5 0.5 0.5 0.5 0 0.5 0.5 0 0.5 0.5 0 0.5
and the vector yd as
yd 0.42074,0.47943,0.42074,0,0,0,0.42074,0.47943,0.42074.
Running the Conjugate Gradient algorithm, we get a solution of w0.8806, 0.000. d. The level sets are shown in the figure below.
0.5 0.4 0.3 0.2 0.1
0 0.1 0.2 0.3 0.4
0.5
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
w1
The solution in part c agrees with the level sets. e. The plot of the error function is depicted below.
91
w2
0.5 0.4 0.3 0.2 0.1
0 0.1 0.2 0.3 0.4 0.5
1
0
x2
11 0.5 0 x1
0.5
1
13.2
a. The expression we seek is To derive the above, we write
ek1 1ek.
yd xd wk1 yd xd wk
ek1 ek
Substituting for wk1wk from the WidrowHo algorithm yields
xd wk1 wk.
e e xekxd e.
k1 k dxx k dd
Hence, ek11ek.
b. Forek !0,itisnecessaryandsucientthat11,whichisequivalentto02.
13.3
a. The error satisfies
To derive the above expression, we write
ek1ek
Substituting for wk1wk from the algorithm yields
ek1ekXd XdXd Xd1ekek.
Hence, ek1Ipek.
b. From part a, we see that ekIp ke0. Hence, by Lemma 5.1, a necessary and sucient condition for ek ! 0 for any e0 is that all the eigenvalues of Ip must be located in the open unit circle. From Exercise 3.6, it follows that the above condition holds if and only if 1i1 for each eigenvalue i of . This is true if and only if 0i2 for each eigenvalue i of .
13.4
We modified the MATLAB routine of Exercise 8.25, by fixing the step size at a value 100. We need the following Mfile for the gradient:
ek1Ipek.
ydXd wk1ydXd wk
Xd wk1 wk.
92
error
function yDfbpw;
wh11w1;
wh21w2;
wh12w3;
wh22w4;
wo11w5;
wo12w6;
xd10; xd21; yd1;
v1wh11xd1wh12xd2;
v2wh21xd1wh22xd2;
z1sigmoidv1;
z2sigmoidv2;
y1sigmoidwo11z1wo12z2
d1ydy1y11y1;
y1d1wo11z11z1xd1;
y2d1wo12z21z2xd1;
y3d1wo11z11z1xd2;
y4d1wo12z21z2xd2;
y5d1z1;
y6d1z2;
yy;
After 20 iterations of the backpropagation algorithm, we get the following weights: wo202.883
wo203.194 12
wh200.1000 11
wh200.8179 12
wh200.3000 21
wh201.106. 22
The corresponding output of the network is y200.9879. 1
13.5
We used the following MATLAB routine:
function x,Nbackpropgrad,xnew,options;
BACKPROPgrad,x0;
BACKPROPgrad,x0,OPTIONS;
xBACKPROPgrad,x0;
xBACKPROPgrad,x0,OPTIONS;
x,NBACKPROPgrad,x0;
x,NBACKPROPgrad,x0,OPTIONS;
The first variant trains a net whose gradient
is described in grad usually an Mfile: grad.m, using a backprop
algorithm with initial point x0.
The second variant allows a vector of optional parameters to
defined. OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results, default is no display: 0.
11
93
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required of the gradient.
OPTIONS14 is the maximum number of iterations.
For more information type HELP FOPTIONS.
The next two variants returns the value of the final point.
The last two variants returns a vector of the final point and the
number of iterations.
if nargin3
options;
if nargin2
dispWrong number of arguments.;
return; end
end
if lengthoptions14
if options140
options141000lengthxnew;
end
else
options141000lengthxnew;
end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
epsilongoptions3;
maxiteroptions14;
for k1:maxiter,
xcurrxnew;
gcurrfevalgrad,xcurr;
if normgcurrepsilong
dispTerminating: Norm of gradient less than;
dispepsilong;
kk1;
break;
end if
alpha10.0;
xnewxcurralphagcurr;
if print,
dispIteration number k
dispk;print iteration index k
dispalpha ;
dispalpha;print alpha
dispGradient;
dispgcurr; print gradient
94
dispNew point ;
dispxnew; print new point
end if
if normxnewxcurrepsilonxnormxcurr
dispTerminating: Norm of difference between iterates less than;
dispepsilonx;
break;
end if
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxnew;
dispNumber of iterations ;
dispk;
end if
To apply the above routine, we need the following Mfile for the gradient.
function ygradw,xd,yd;
wh11w1;
wh21w2;
wh12w3;
wh22w4;
wo11w5;
wo12w6;
t1w7;
t2w8;
t3w9;
xd1xd1; xd2xd2;
v1wh11xd1wh12xd2t1;
v2wh21xd1wh22xd2t2;
z1sigmoidv1;
z2sigmoidv2;
y1sigmoidwo11z1wo12z2t3;
d1ydy1y11y1;
y1d1wo11z11z1xd1;
y2d1wo12z21z2xd1;
y3d1wo11z11z1xd2;
y4d1wo12z21z2xd2;
y5d1z1;
y6d1z2;
y7d1wo11z11z1;
y8d1wo12z21z2;
y9d1;
95
yy;
We applied our MATLAB routine as follows.
options2107;
options3107;
options1410000;
w00.1,0.3,0.3,0.4,0.4,0.6,0.1,0.1,0.1;
wstar,Nbackpropgrad,w0,options
Terminating with maximum number of iterations
wstar
7.7771e00
5.5932e00
8.4027e00
5.6384e00
1.1010e01
1.0918e01
3.2773e00
8.3565e00
5.2606e00 N
10000
As we can see from the above, the results coincide with Example 13.3. The table of the outputs of the trained network corresponding to the training input data is shown in Table 13.2.
14. Global Search Algorithms
14.1
The MATLAB program is as follows.
functionoutputargs nmsimplex inputargs
NelderMead simplex method
Based on the program by the Spring 2007 ECE580 student, Hengzhou Ding
disp We minimize a function using the NelderMead method.
disp There are two initial conditions.
disp You can enter your own starting point.
disp
dispSelect one of the starting points
disp 0.55;0.7 or 0.9;0.5
x0input
disp
clear
close all;
dispSelect one of the starting points, or enter your own point
disp0.55;0.7 or 0.9;0.5
dispCopy one of the above points and paste it at the prompt
x0input
hold on
axis square
Plot the contours of the objective function
X1,X2meshgrid1:0.01:1;
96
YX2X1.412.X1.X2X1X23;
C,hcontourX1,X2,Y,20;
clabelC,h;
Initialize all parameters
lambda0.1;
rho1;
chi2;
gamma12;
sigma12;
e11 0;
e20 1;
x00.55 0.7;
x00.9 0.5;
Plot initial point and initialize the simplex
plotx01,x02,;
x:,30;
x:,1x0lambdae1;
x:,2x0lambdae2;
while 1
Check the size of simplex for stopping criterion
simpsizenormx:,1x:,2normx:,2x:,3normx:,3x:,1;
ifsimpsize1e6
break;
end
lastptx:,3;
Sort the simplex
xsortpointsx,3;
Reflection
centro12x:,1x:,2;
xrcentrorhocentrox:,3;
Accept condition
ifobjfunxrobjfunx:,1objfunxrobjfunx:,2
x:,3xr;
Expand condition
elseifobjfunxrobjfunx:,1
xecentrorhochicentrox:,3;
ifobjfunxeobjfunxr
x:,3xe;
else
x:,3xr;
end
Outside contraction or shrink
elseifobjfunxrobjfunx:,2
objfunxrobjfunx:,3
xccentrogammarhocentrox:,3;
ifobjfunxcobjfunx:,3
x:,3xc;
else
xshrinkx,sigma;
end
Inside contraction or shrink
else
xcccentrogammacentrox:,3;
ifobjfunxccobjfunx:,3
x:,3xcc;
97
else
xshrinkx,sigma;
end
end
Plot the new point and connect
plotlastpt1,x1,3,lastpt2,x2,3,;
end
Output the final simplex minimizer
x:,1
objfun
function yobjfunx
yx1x2412x1x2x1x23;
sortpoints
function ysortpointsx,N
for i1:N1
for j1:Ni
ifobjfunx:,jobjfunx:,j1
tmpx:,j;
x:,jx:,j1;
x:,j1tmp;
end
end
end
yx;
shrink
function yshrinkx,sigma
x:,2x:,1sigmax:,2x:,1;
x:,3x:,1sigmax:,3x:,1;
yx;
When we run the MATLAB code above with initial condition 0.55,0.7, we obtain the following plot: 0.8
3.8287
5.3062
2.3512
3.0899
3.8287
1.6124
0.6 0.4 0.2
0 0.2 0.4 0.6 0.8
0.55 0.6
0.65 0.7
0.75 0.8 0.85 0.9
2.3512
0.87367
1.6124
0.13491
0.60384
2.0814
0.60384
5.7751
2.8201
3.5589
2.0814 3.5589
1.3426
5.0364
4.2976
5.0364
5.7751
5.7751
When we run the MATLAB code above with initial condition 0.9, 0.5, we obtain the following plot: 98
1
0.5
0
0.5
0.9 0.8 0.7 0.6 0.5 0.4
Note that this function has two local minimizers. The algorithm terminates at these two minimizers with the two dierent initial conditions. This behavior depends on the value of lambda, which determines the initial simplex. It is possible to reach both minimizers starting from the same initial point by using dierent values of lambda. In the solution above, the initial simplex is smalllambda is just 0.1.
14.2
A MATLAB routine for a naive random search algorithm is given by the Mfile rsdemo shown below:
function x,Nrandomsearchfuncname,xnew,options;
Naive random search demo
x,Nrandomsearchfuncname,xnew,options;
printoptions1;
alphaoptions18;
if nargin3
options;
if nargin2
dispWrong number of arguments.;
return; end
end
if lengthoptions14
if options140
options141000lengthxnew;
end
else
options141000lengthxnew;
end
if lengthoptions18
options181.0; optional step size
end
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
2.8201
1.3426
2.0814
0.13491 0.60384
3.5589
2.8201
3.5589
3.5589
2.0814
3.5589
2.8201
2.8201
2.0814
1.3426
0.60384
0.87367
1.3426
0.13491
0.87367
0.60384
0.13491
1.6124
3.8287
99
epsilongoptions3;
maxiteroptions14;
alpha0options18;
if funcnamefr,
roscnt
elseif funcnamefp,
pkscnt;
end if
if lengthxnew2
plotxnew1,xnew2,o
textxnew1,xnew2,Start Point
xlower2;1;
xupper2;3;
end
f0fevalfuncname,xnew;
xbestcurrxnew;
xbestoldxnew;
fbestfevalfuncname,xnew;
fbest10signfbestfbest;
for k1:maxiter,
xcurrxbestcurr;
fcurrfevalfuncname,xcurr;
alphaalpha0;
xnewxcurralpha2randlengthxcurr,11;
for i1:lengthxnew,
xnewimaxxnewi,xloweri;
xnewiminxnewi,xupperi;
end for
fnewfevalfuncname,xnew;
if fnewfbest,
xbestoldxbestcurr;
xbestcurrxnew;
fbestfnew;
end
if print,
dispIteration number k
dispk;print iteration index k
dispalpha ;
dispalpha;print alpha
dispNew point ;
dispxnew; print new point
dispFunction value ;
dispfnew; print func value at new point
end if
if normxnewxbestoldepsilonxnormxbestold
dispTerminating: Norm of difference between iterates less than;
dispepsilonx;
100
break;
end if
pltptsxbestcurr,xbestold;
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxbestcurr;
dispNumber of iterations ;
dispk;
end if
A MATLAB routine for a simulated annealing algorithm is given by the Mfile sademo shown below:
function x,Nsimulatedannealingfuncname,xnew,options;
Simulated annealing demo
randomsearchfuncname,xnew,options;
printoptions1;
gammaoptions15;
alphaoptions18;
if nargin3
options;
if nargin2
dispWrong number of arguments.;
return; end
end
if lengthoptions14
if options140
options141000lengthxnew;
end
else
options141000lengthxnew;
end
if lengthoptions15
options155.0;
end
if options150
options155.0;
end
if lengthoptions18
options180.5; optional step size
end
format compact;
101
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
epsilongoptions3;
maxiteroptions14;
alphaoptions18;
gammaoptions15;
k02;
if funcnamefr,
roscnt
elseif funcnamefp,
pkscnt;
end if
if lengthxnew2
plotxnew1,xnew2,o
textxnew1,xnew2,Start Point
xlower2;1;
xupper2;3;
end
f0fevalfuncname,xnew;
xbestcurrxnew;
xbestoldxnew;
xcurrxnew;
fbestfevalfuncname,xnew;
fbest10signfbestfbest;
for k1:maxiter,
fcurrfevalfuncname,xcurr;
xnewxcurralpha2randlengthxcurr,11;
for i1:lengthxnew,
xnewimaxxnewi,xloweri;
xnewiminxnewi,xupperi;
end for
fnewfevalfuncname,xnew;
if fnewfcurr,
xcurrxnew;
fcurrfnew;
else
cointossrand1;
Tempgammalogkk0;
ProbexpfnewfcurrTemp;
if cointossProb,
xcurrxnew;
fcurrfnew;
end
end
if fnewfbest,
102
xbestoldxbestcurr;
xbestcurrxnew;
fbestfnew;
end
if print,
dispIteration number k
dispk;print iteration index k
dispalpha ;
dispalpha;print alpha
dispNew point ;
dispxnew; print new point
dispFunction value ;
dispfnew; print func value at new point
end if
if normxnewxbestoldepsilonxnormxbestold
dispTerminating: Norm of difference between iterates less
than;
dispepsilonx;
break;
end if
pltptsxbestcurr,xbestold;
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxbestcurr;
dispObjective function value ;
dispfbest;
dispNumber of iterations ;
dispk;
end if
To use the above routines, we also need the following Mfiles: pltpts.m:
function outpltptsxnew,xcurr
plotxcurr1,xnew1,xcurr2,xnew2,r,xnew1,xnew2,o,Erasemode,
none;
drawnow;Draws current graph now
pause1
out;
fp.m:
function yfpx;
103
y31x1.2.expx1.221.2
10.x15x1.32.5.exp
x1.22.2expx11.22.23;
yy;
pkscnt.m:
echo off
X3:0.2:3;
Y3:0.2:3;
x,ymeshgridX,Y ;
func31x.2.expx.2y1.2
10.x5x.3y.5.expx.2y.2
expx1.2y.23;
funcfunc;
clf
levelsexp5:10;
levels5:0.9:10;
contourX,Y,func,levels,k
xlabelx1
ylabelx2
titleMinimization of Peaks function
drawnow;
hold on
plot0.0303,1.5455,o
text0.0303,1.5455,Solution
To run the naive random search algorithms, we first pick a value of 0.5, which involves setting options180.5. We then use the command rsdemofp,0;2,options. The resulting plot of the algorithm trajectory is given below. As we can see, the algorithm is stuck at a local minimizer. By running the algorithm several times, the reader can verify that this nonconvergent behavior is typical.
3
2
1
0
1
2
Minimization of Peaks function
Solution
Start Point
x 2
3
3 2 1 0 1 2 3
x 1
Next, we try 1.5, which involves setting options181.5. We then use the command rsdemofp,0;2,options again, to obtain the plot shown below. This time, the algorithm reaches the global minimizer.
104
14.3
3
2
1
0
1
2
Minimization of Peaks function
Solution
Start Point
xx 22
3
3 2 1 0 1 2 3
x 1
Finally, we again set 0.5, using options180.5. We then run the simulated annealing code using sademofp,0;2,options. The algorithm can be seen to converge to the global minimizer, as plotted below.
3
2
1
0
1
2
Minimization of Peaks function
Solution
Start Point
3
3 2 1 0 1 2 3
x 1
A MATLAB routine for a particle swarm algorithm is:
A particle swarm optimizer
to find the minimummaximum of the MATLABs peaks function
D of inputs to the function dimension of problem
clear
Parameters
ps10;
D2;
pslb3;
psub3;
vellb1;
velub1;
iterationn50;
range3, 3; 3, 3;Range of the input variables
Plot contours of peaks function
x, y, zpeaks;
pcolorx,y,z; shading interp; hold on;
105
contourx, y, z, 20, r;
meshx,y,z
hold off;
colormapgray;
setgca,Fontsize,14
axis3 3 3 3 9 9
axis square;
xlabelx1,Fontsize,14;
ylabelx2,Fontsize,14;
zlabelfx1,x2,Fontsize,14;
hold on
upperzerositerationn, 1;
averagezerositerationn, 1;
lowerzerositerationn, 1;
initialize population of particles and their velocities at time
zero,
format of pos particle, dimension
construct random population positions bounded by VR
need to bound positions
pspospslbpsubpslb.randps,D;
need to bound velocities between mv,mv
psvelvellbvelubvellb.randps,D;
initial pbest positions
pbestpspos;
returns column of cost values 1 for each particle
f131psposi,12exppsposi,12psposi,212;
f210psposi,15psposi,13psposi,25exppsposi,12psposi,22;
f313exppsposi,112psposi,22;
pbestfitzerosps,1;
for i1:ps
g1i31psposi,12exppsposi,12psposi,212;
g2i10psposi,15psposi,13psposi,25exppsposi,12psposi,22;
g3i13exppsposi,112psposi,22;
pbestfitig1ig2ig3i;
end
pbestfit;
handp3plot3pspos:,1,pspos:,2,pbestfit,k,markersize,15,erase,xor;
initial gbest
gbestval,gbestidxmaxpbestfit;
gbestval,gbestidxminpbestfit; this is to minimize
gbestpsposgbestidx,:;
get new velocities, positions this is the heart of the PSO
algorithm
for k1:iterationn
for count1:ps
psvelcount,:0.729psvelcount,:
prev vel
106
1.494randpbestcount,:psposcount,:
1.494randgbestpsposcount,:;
end
psvel;
update new position
pspospspospsvel;
update pbest
for i1:ps
independent
social
g1i31psposi,12exppsposi,12psposi,212;
g2i10psposi,15psposi,13psposi,25exppsposi,12psposi,22;
g3i13exppsposi,112psposi,22;
pscurrentfitig1ig2ig3i;
if pscurrentfitipbestfiti
pbestfitipscurrentfiti;
pbesti,:psposi,:;
end end
pbestfit;
update gbest
gbestval,gbestidxmaxpbestfit;
gbestpsposgbestidx,:;
Fill objective function vectors
upperkmaxpbestfit;
averagekmeanpbestfit;
lowerkminpbestfit;
sethandp3,xdata,pspos:,1,ydata,pspos:,2,zdata,pscurrentfit;
drawnow
pause
end
gbest
gbestval
figure;
x1:iterationn;
plotx, upper, o, x, average, x, x, lower, ;
hold on;
plotx, upper average lower;
hold off;
legendBest, Average, Poorest;
xlabelIterations; ylabelObjective function value;
When we run the MATLAB code above, we obtain a plot of the initial set of particles, as shown below.
107
5
0 5
Then, after 30 iterations, we obtain:
5
0 5
Finally, after 50 iterations, we obtain:
5
0 5
2
0
2
x2
2 0
2
0
2
x2
2 0
2
x1
fx1,x2 fx1,x2 fx1,x2
2
0
2
x2
2 0
2
x1
108
2
x1
A plot of the objective function values best, average, and poorest is shown below.
9 8 7 6 5 4 3 2 1 0
Best
Average Poorest
1
0 10 20 30 40 50
Iterations
14.4
a. Expanding the right hand side of the second expression gives the desired result. b. Applying the algorithm, we get a binary representation of 11111001011, i.e.,
1995210 29 28 27 26 23 21 20. c. Applying the algorithm, we get a binary representation of 0.1011101, i.e.,
0.72656252123242527.
d. We have 19242120, i.e., the binary representation for 19 is 10011. For the fractional part, we need at least 7 bits to keep at least the same accuracy. We have 0.952122232427 , i.e., the binary representation is 0.1111001. Therefore, the binary representation of 19.95 with at least the same degree of accuracy is 10011.1111001.
14.5
It suces to prove the result for the case where only one symbol is swapped, since the general case is obtained by repeating the argument. We have two scenarios. First, suppose the symbol swapped is at a position corresponding to a dont care symbol in H. Clearly, after the swap, both chromosomes will still be in H. Second, suppose the symbol swapped is at a position corresponding to a fixed symbol in H. Since both chromosomes are in H, their symbols at that position must be identical. Hence, the swap does not change the chromosomes. This completes the proof.
14.6 T
Consider a given chromosome in Mk H. The probability that it is chosen for crossover is qc. If neither of its osprings is in H, then at least one of the crossover points must be between the corresponding first and last fixed symbols of H. The probability of this is 11HL12. To see this, note that the probability that each crossover point is not between the corresponding first and last fixed symbols is 1HL1, and thus the probability that both crossover points are not between the corresponding first and last fixed symbols of H is 1HL12. Hence, the probability that the given chromosome is chosen for crossover and neither of its osprings is in H is bounded above by
qc 11 H2!. L1
109
Objective function value
14.7
As for twopoint crossover, the npoint crossover operation is a composition of n onepoint crossover opera tions i.e., n onepoint crossover operations in succession. The required result for this case is as follows.
Lemma:
Given a chromosome in Mk is in H is bounded above by
T
H, the probability that it is chosen for crossover and neither of its osprings
For the proof, proceed as in the solution of Exercise 14.6, replacing 2 by n. 14.8
function Mroulettewheelfitness;
function Mroulettewheelfitness
fitnessvector of fitness values of chromosomes in population
Mvector of indices indicating which chromosome in the
given population should appear in the mating pool
fitnessfitnessminfitness;to keep the fitness positive
if sumfitness0,
dispPopulation has identical chromosomesSTOP;
break; else
fitnessfitnesssumfitness;
end
cumfitnesscumsumfitness;
for i1:lengthfitness,
tmpfindcumfitnessrand0;
Mitmp1;
end
14.9
parent1, parent2two binary parent chromosomes row vectors
Llengthparent1;
crossoverptceilrandL1;
offspring1parent11:crossoverpt parent2crossoverpt1:L;
offspring2parent21:crossoverpt parent1crossoverpt1:L;
14.10
matingpoolmatrix of 01 elements; each row represents a chromosome
pmprobability of mutation
Nsizematingpool,1;
Lsizematingpool,2;
mutationpointsrandN,Lpm;
newpopulationxormatingpool,mutationpoints;
14.11
A MATLAB routine for a genetic algorithm with binary encoding is: 110
qc11 Hn. L1
2
function winner,bestfitnessgaL,N,fitfunc,options
function winnerGAL,N,fitfunc
Function call: GAL,N,f
Llength of chromosomes
Npopulation size must be an even number
fname of fitness value function
Options:
printoptions1;
selectionoptions5;
maxiteroptions14;
pcoptions18;
pmpc100;
Selection:
options50 for roulette wheel, 1 for tournament
clf;
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return; end
end
if lengthoptions14
if options140
options143N;
end
else
options143N;
end
if lengthoptions18
options180.75; optional crossover rate
end
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
selectionoptions5;
maxiteroptions14;
pcoptions18;
pmpc100;
PrandN,L0.5;
bestvaluesofar0;
Initial evaluation
for i1:N,
fitnessifevalfitfunc,Pi,:;
end
bestvalue,bestmaxfitness;
if bestvaluebestvaluesofar,
bestsofarPbest,:;
bestvaluesofarbestvalue;
111
end
for k1:maxiter,
Selection
fitnessfitnessminfitness;to keep the fitness positive
if sumfitness0,
dispPopulation has identical chromosomesSTOP;
dispNumber of iterations:;
dispk;
for ik:maxiter,
upperiupperi1;
averageiaveragei1;
loweriloweri1;
end
break; else
fitnessfitnesssumfitness;
end
if selection0,
roulettewheel
cumfitnesscumsumfitness;
for i1:N,
tmpfindcumfitnessrand0;
mitmp1;
end
else
tournament
for i1:N,
fighter1ceilrandN;
fighter2ceilrandN;
if fitnessfighter1fitnessfighter2,
mifighter1;
else
mifighter2;
end
end end
MzerosN,L;
for i1:N,
Mi,:Pmi,:;
end
Crossover
MnewM;
for i1:N2
ind1ceilrandN;
ind2ceilrandN;
parent1Mind1,:;
parent2Mind2,:;
if randpc
crossoverptceilrandL1;
offspring1parent11:crossoverpt parent2crossoverpt1:L;
offspring2parent21:crossoverpt parent1crossoverpt1:L;
Mnewind1,:offspring1;
Mnewind2,:offspring2;
end end
112
Mutation
mutationpointsrandN,Lpm;
PxorMnew,mutationpoints;
Evaluation
for i1:N,
fitnessifevalfitfunc,Pi,:;
end
bestvalue,bestmaxfitness;
if bestvaluebestvaluesofar,
bestsofarPbest,:;
bestvaluesofarbestvalue;
end
upperkbestvalue;
averagekmeanfitness;
lowerkminfitness;
end for
if kmaxiter,
dispAlgorithm terminated after maximum number of iterations:;
dispmaxiter;
end
winnerbestsofar;
bestfitnessbestvaluesofar;
if print,
iter1:maxiter;
plotiter,upper,o:,iter,average,x,iter,lower,;
legendBest, Average, Worst;
xlabelGenerations,Fontsize,14;
ylabelObjective Function Value,Fontsize,14;
setgca,Fontsize,14;
hold off;
end
a. To run the routine, we create the following Mfiles.
function decbin2decbin,range;
function decbin2decbin,range;
Function to convert from binary bin to decimal dec in a given range
indexpolyvalbin,2;
decindexrange2range12lengthbin1range1;
function yfmanymaxx;
y15sin2x2x22160;
function yfitfunc1binchrom;
1D fitness function
ffmanymax;
range10,10;
xbin2decbinchrom,range;
yfevalf,x;
We use the following script to run the algorithm: 113
clear;
options11;
x,yga8,10,fitfunc1,options;
ffmanymax;
range10,10;
dispGA Solution:;
dispbin2decx,range;
dispObjective function value:;
dispy;
Running the above algorithm, we obtain a solution of x1.6078, and an objective function value of 159.7640. The figure below shows a plot of the best, average, and worst solution from each generation of the population.
160 150 140 130 120 110 100
90 80 70
60
0 5 10 15 20 25 30
Generations
b. To run the routine, we create the following Mfiles we also use the routine bin2dec from part a. function yfpeaksx;
y31x1.2.expx1.221.2
10.x15x1.32.5.exp
x1.22.2expx11.22.23;
function yfitfunc2binchrom;
2D fitness function
ffpeaks;
xrange3,3;
yrange3,3;
Llengthbinchrom;
x1bin2decbinchrom1:L2,xrange;
x2bin2decbinchromL21:L,yrange;
yfevalf,x1,x2;
We use the following script to run the algorithm:
clear;
options11;
x,yga16,20,fitfunc2,options;
ffpeaks;
xrange3,3;
Best Average Worst
114
Objective Function Value
yrange3,3;
Llengthx;
x1bin2decx1:L2,xrange;
x2bin2decxL21:L,yrange;
dispGA Solution:;
dispx1,x2;
dispObjective function value:;
dispy;
A plot of the objective function is shown below.
160 140 120 100
80 60 40 20
0
10 5 0 5 10
x
Running the above algorithm, we obtain a solution of 0.0353, 1.4941, and an x0.0588, 1.5412, and an objective function value of 7.9815. Compare this solution with that of Example 14.3. The figure below shows a plot of the best, average, and worst solution from each generation of the population.
8 6 4 2 0
2 4 6
Best Average Worst
14.12
0 10 20 30 40 50 60
Generations
A MATLAB routine for a realnumber genetic algorithm:
function winner,bestfitnessgarDomain,N,fitfunc,options
function winnerGARDomain,N,fitfunc
Function call: GARDomain,N,f
Domainsearch space; e.g., 2,2;3,3 for the space 2,23,3
115
Objective Function Value
fx
number of rows of Domaindimension of search space
Npopulation size must be an even number
fname of fitness value function
Options:
printoptions1;
selectionoptions5;
maxiteroptions14;
pcoptions18;
pmpc100;
Selection:
options50 for roulette wheel, 1 for tournament
clf;
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return; end
end
if lengthoptions14
if options140
options143N;
end
else
options143N;
end
if lengthoptions18
options180.75; optional crossover rate
end
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
selectionoptions5;
maxiteroptions14;
pcoptions18;
pmpc100;
nsizeDomain,1;
lowbDomain:,1;
upbDomain:,2;
bestvaluesofar0;
for i1:N,
Pi,:lowbrand1,n.upblowb;
Initial evaluation
fitnessifevalfitfunc,Pi,:;
end
bestvalue,bestmaxfitness;
if bestvaluebestvaluesofar,
bestsofarPbest,:;
bestvaluesofarbestvalue;
end
116
for k1:maxiter,
Selection
fitnessfitnessminfitness;to keep the fitness positive
if sumfitness0,
dispPopulation has identical chromosomesSTOP;
dispNumber of iterations:;
dispk;
for ik:maxiter,
upperiupperi1;
averageiaveragei1;
loweriloweri1;
end
break; else
fitnessfitnesssumfitness;
end
if selection0,
roulettewheel
cumfitnesscumsumfitness;
for i1:N,
tmpfindcumfitnessrand0;
mitmp1;
end
else
tournament
for i1:N,
fighter1ceilrandN;
fighter2ceilrandN;
if fitnessfighter1fitnessfighter2,
mifighter1;
else
mifighter2;
end
end end
MzerosN,n;
for i1:N,
Mi,:Pmi,:;
end
Crossover
MnewM;
for i1:N2
ind1ceilrandN;
ind2ceilrandN;
parent1Mind1,:;
parent2Mind2,:;
if randpc
arand;
offspring1aparent11aparent2rand1,n0.5.upblowb10;
offspring2aparent21aparent1rand1,n0.5.upblowb10;
do projection
for j1:n,
if offspring1jlowbj,
offspring1jlowbj;
elseif offspring1jupbj,
offspring1jupbj;
end
117
if offspring2jlowbj,
offspring2jlowbj;
elseif offspring2jupbj,
offspring2jupbj;
end end
Mnewind1,:offspring1;
Mnewind2,:offspring2;
end
end
Mutation
for i1:N,
if randpm,
arand;
Mnewi,:aMnewi,:1alowbrand1,n.upblowb;
end end
PMnew;
Evaluation
for i1:N,
fitnessifevalfitfunc,Pi,:;
end
bestvalue,bestmaxfitness;
if bestvaluebestvaluesofar,
bestsofarPbest,:;
bestvaluesofarbestvalue;
end
upperkbestvalue;
averagekmeanfitness;
lowerkminfitness;
end for
if kmaxiter,
dispAlgorithm terminated after maximum number of iterations:;
dispmaxiter;
end
winnerbestsofar;
bestfitnessbestvaluesofar;
if print,
iter1:maxiter;
plotiter,upper,o:,iter,average,x,iter,lower,;
legendBest, Average, Worst;
xlabelGenerations,Fontsize,14;
ylabelObjective Function Value,Fontsize,14;
setgca,Fontsize,14;
hold off;
end
To run the routine, we create the following Mfile for the given function.
function yfwavex;
yx1sinx1x2sin5x2;
118
We use the following script to run the algorithm:
options11;
options1450;
x,ygar0,10;4,6,20,fwave,options;
dispGA Solution:;
dispx;
dispObjective function value:;
dispy;
Running the above algorithm, we obtain a solution of x7.9711,5.3462, and an objective function value of 13.2607. The figure below shows a plot of the best, average, and worst solution from each generation of the population.
15
10
5
0
5
Best Average Worst
10
0 10 20 30 40 50
Generations
Using the MATLAB function fminunc from the Optimization Toolbox, we found the optimal point to be 7.9787,5.3482, with objective function value 13.2612. We can see that this solution agrees with the solution obtained using our realnumber genetic algorithm.
119
Objective Function Value
15. Introduction to Linear Programming
15.1
15.2
We have
minimize 21×2 subjecttox1x3 x1x2x4 x12x2x5 x1,,x5
2350
x2 ax1 bu1 a2x0 abu0 bu1 a2 ab,bu
where uu0 , u1is the decision variable. We can write the constraint as ui1 and ui1. Hence, the
problem is:
minimize a2ab, bu subjectto 1ui 1, i1,2.
Since a2 is a constant, we can remove it from the objective function without changing the solution. Intro ducing slack variables v1, v2, v3, v4, we obtain the standard form problem
15.3
minimize subject to
ab, bu u0v11 u0v21 u1v31 u1v41.
Let xi , xi0 be such that xixixi , xixixi . Substituting into the original problem, we have
minimize c1x1 x1 c2x2 x2 cnx2 x2subject to Axxb
x,x 0,
where xx1 ,,xnand xx1 ,,xn . Rewriting, we get
minimize c , c z subject to A,Azb
z0,
which is an equivalent linear programming problem in standard form.
Note that although the variables xi and xi in the solution are required to satisfy xi xi0, we do
not need to explicitly include this in the constraint because any optimal solution to the above transformed problem automatically satisfies the condition xi xi0. To see this, suppose we have an optimal solution with both xi0 and xi0. In this case, note that ci0 for otherwise we can add any arbitrary constant to both xi and xi and still satisfy feasibility, but decrease the objective function value. Then, by subtracting minxi ,xifrom xi and xi , we have a new feasible point with lower objective function value, contradicting the optimality assumption. See also M. A. Dahleh and I. J. DiazBobillo, Control of Uncertain Systems: A Linear Programming Approach, Prentice Hall, 1995, pp. 189190.
120
15.4
Not every linear programming problem in standard form has a nonempty feasible set. Example:
minimize x1 subject to x11
x10.
Not every linear programming problem in standard form even assuming a nonempty feasible set has an
optimal solution. Example:
15.5
minimize x1 subject to x21
x1,x2 0.
Let x1 and x2 represent the number of units to be shipped from A to C and to D, respectively, and x3 and x4 represent the number of units to be shipped from B to C and to D, respectively. Then, the given problem can be formulated as the following linear program:
minimize subject to
Introducing slack variables x5 and x6, we have the standard form problem
15.6
minimize subject to
x1 22 33 44 x1x350
x2x460
x1x2x570
x3x4x680 x1,x2,x3,x4,x5,x6 0.
x1 22 33 44 x1x350
x2x460
x1x270
x3x480 x1,x2,x3,x4 0.
We can see that there are two paths from A to E ACDE and ACBFDE, and two paths from B to F BCDF and BF. Let x1 and x2 be the data rates for the two paths from A to E, respectively, and x3 and x4 the data rates for the two paths from B to F , respectively. The total revenue is then 21x233x4. For each link, we have a data rate constraint on the sum of all xis passing through that link. For example,
121
for link BC, there are two paths passing through it, with total data rate x2x3. Hence, the constraint for link BC is x2x37. Hence, the optimization problem is the following linear programming problem:
Converting this to standard form:
subject to
15.7
maximize subject to
21x233x4 x1x210
x1x212
x1x38
x2x37
x2x34
x2x43 x1,,x40.
2x1x23x3x4 x1x2x510
x1x2x612
x1x3x78
x2x3x87 x2x3x94 x2x4x103 x1,,x100.
minimize
Let xi0, i1,,4, be the weight in pounds of item i to be used. Then, the total weight is x1x2x3x4. To satisfy the percentage content of fiber, fat, and sugar, and the total weight of 1000, we need
31 82 163 44 61 462 93 94 201 52 43 0x4
101 x2 x3 x421 x2 x3 x451 x2 x3 x41000
x1 x2 x3 x4
The total cost is 2142×324. Therefore, the problem is:
minimize 2142×324 subject to 71 22 63 64 41 442 73 74 15x1x35x4 x1 x2 x3 x4 x1,x2,x3,x4
0
0
0
10000
Alternatively, we could have simply replaced x1x2x3x4 in the first three equality constraints above by 1000, to obtain:
31 82 163 4410000 61 462 93 942000 201 52 43 0x45000
x1 x2 x3 x41000.
Note that the only vector satisfying the above linear equations is 179,175,573,422, which is not feasible. Therefore, the constraint does not have any any feasible points, which means that the problem does not have a solution.
122
15.8
The objective function is p1pn. The constraint for the ith location is: gi,1p1gi,npnP . Hence, the the optimization problem is:
minimize p1 pn
subject to gi,1p1gi,npnP, i1, . . . , m
p1,,pn0.
By defining the notation Ggi,j mn, en1,,1 with n components, and pp1,,pn,
we can rewrite the problem as
minimize en p subject to GpPem
p0.
It is easy to check using MATLAB, for example that the matrix
26 21 21 3 37 A41 2 3 1 05
1 0 2 0 5
is of full rank i.e., rank A3. Therefore, the system has basic solutions. To find the basic solutions, we first select bases. Each basis consists of three linearly independent columns of A. These columns correspond to basic variables of the basic solution. The remaining variables are nonbasic and are set to 0. The matrix A has 5 columns; therefore, we have 5310 possible candidate basic solutions corresponding to the 10 combinations of 3 columns out of 5. It turns out that all 10 combinations of 3 columns of A are linearly independent. Therefore, we have 10 basic solutions. These are tabulated as follows:
15.9
15.10
Columns 1,2,3 1,2,4 1,2,5 1,3,4 1,3,5 1,4,5 2,3,4 2,3,5 2,4,5 3,4,5
Basic Solutions 417, 8017, 8317, 0, 0 10, 49, 0, 83, 0 10531, 2531, 0, 0, 8331 1211, 0, 4911, 8011, 0 10035, 0, 2535, 0, 8035 6518, 0, 0, 2518, 4918 0,6,5,2,0
0, 10023, 10523, 0, 423 0, 13, 0, 21, 2
0, 0, 6519, 10019, 1219
In the figure below, the shaded region corresponds to the feasible set. We then translate the line 21 520 across the shaded region until the line just touches the region at one point, and the line is as far as possible from the origin. The point of contact is the solution to the problem. In this case, the solution is 2,6, and the corresponding cost is 34.
123
x2
2 6
0 6
4 4
x1
4 0
15.11
We use the following MATLAB commands:
f0,10,0,6,20;
A1,1,1,0,0; 0,0,1,1,1;
b0;0;
vlbzeros5,1;
vub4;3;3;2;2;
x0zeros5,1;
neqcstr2;
xlinprogf,A,b,vlb,vub,x0,neqcstr
x
4.0000
2.0000
2.0000
0.0000
2.0000
The solution is 4, 2, 2, 0, 2 . 16. The Simplex Method
124
x1x28
2x15x234
2x15x20
16.1
a. Performing a sequence of elementary row operations, we obtain
26 1 21 3 2 37 26 1 21 3 2 37 26 1 21 3 2 37 A62 1 3 0 17!60 5 5 6 37!60 5 5 6 37B. 43 1 2 3 35 40 5 5 6 35 40 0 4 2 15
1 2 3 1 1 0 0 4 2 1 0 0 0 0 0
Because elementary row operations do not change the rank of a matrix, rankArankB. Therefore
rank A3.
b. Performing a sequence of elementary row operations, we obtain
2611 237 261 10 6 137 261 10 6 137 A42155!41125!4010 5 15
1 10 6 1 2 15 0 21 12 3
261110 6372611 10 637 !40 1 10 5 5!40 1 10 5 5B
0 3 21 12 0 0 33 3
Because elementary row operations do not change the rank of a matrix, rankArankB. Therefore
rank A3 if6 3 and rank A2 if 3. 16.2
a.
A3 1 0 1, b4, c2,1,1,0. 6211 5
b. Pivoting the problem tableau about the elements 1, 4 and 2, 3, we obtain
31014 31101 50001
c. Basic feasible solution: x0, 0, 1, 4 , c x1.
d. r1,r2,r3,r45,0,0,0.
e. Since the reduced cost coecients are all0, the basic feasible solution in part c is optimal.
f. The original problem does indeed have a feasible solution, because the artificial problem has an optimal feasible solution with objective function value 0, as shown in the final phase I tableau.
g. Extract the submatrices corresponding to A and b, append the last row c,0, and pivot about the 2, 1th element to obtain
16.3
The problem in standard form is:
0 0 1 1 3
1 13 13 0 13 0 53 53 0 23
minimize subject to
x1x233 x1x31
x2x32 x1,x2,x3 0.
125
We form the tableau for the problem:
Performing necessary row operations, we obtain a tableau in canonical form:
1011 0112 0 0 1 3
0, 1, 1 . The optimal cost is 4. 16.4
The problem in standard form is:
We form the tableau for the problem:
minimize subject to
21×2
x1x35
x2x47
x1x2x59 x1,,x50.
1011
0112 1 1 3 0
We pivot about the 1, 3th element to get:
1 1 0 1
1004
The reduced cost coecients are all nonnegative. Hence, the current basic feasible solution is optimal:
1011
101005 010107 110019
2 1 0 0 0 0
The above tableau is already in canonical form, and therefore we can proceed with the simplex procedure.
We first pivot about the 1, 1th element, to get
101005 010107 0 1 1 0 1 4 01 2 0010
Next, we pivot about the 3, 2th element to get
101005 0 0 1 1 1 3 0 1 1 0 1 4 0 0 1 0 1 14
The reduced cost coecients are all nonnegative. Hence, the optimal solution to the problem in standard form is 5, 4, 0, 3, 0. The corresponding optimal cost is 14.
126
16.5
a. Let Ba2, a1 represent the first two columns of A ordered according to the basis corresponding to the given canonical tableau, and D the second two columns. Then,
Hence, Hence,
B1D1 2, 34
1 2132 12 BD34 2 1 .
A12 32 0 1 1 2 1 0
An alternative approach is to realize that the canonical tableau is obtained from the problem tableau via elementary row operations. Therefore, we can obtain the entries of A from the 24 upperleft submatrix of the canonical tableau via elementary row operations also. Specifically, start with
0 1 1 2 1034
and then do two pivoting operations, one about 1, 4 and the other about 2, 3. b. The righthalf of c is given by
cD rD cBB1D1,17,81 21,131,4630,47. 34
So c8,7,30,47.
c. First we calculate B1b, giving us the basic variable values:
B1b2 1516. 436 38
Hence, the BFS is 38, 16, 0, 0 .
d. The first two entries are 16 and 38, respectively. The last component is cBB1b716838
416. Hence, the last column is the vector 16, 38, 416 .
16.6
The columns in the constraint matrix A corresponding to xi and xi are linearly dependent. Hence they cannot both enter a basis at the same time. This means that only one variable, xi or xi , can assume a nonnegative value; the nonbasic variable is necessarily zero.
16.7
a. From the given information, we have the 46 canonical tableau
26 10 0 12 1 37 601 0 0 27 400 1 0 35 0 1 0 0 1 6
Explanations:
The given vector x indicates that A is 35.
127
In the above tableau, we assume that the basis is a1,a3,a4, in this order. Other permutations of orders will result in interchanging rows among the first three rows of the tableau.
The fifth column represents the coordinates of a5 with respect to the basis a1,a3,a4. Because 2, 0, 0, 0, 4 lies in the nullspace of A, we deduce that 2a14a50, which can be rewritten as a512a10a30a4, and hence the coordinate vector is 12, 0, 0.
b. Let d02, 0, 0, 0, 4. Then, Ad00. Therefore, the vector x0xd0 also satisfies Axb. Now, x0xd012, 0, 2, 3, 4. For x0 to be feasible, we must have 12. Moreover, the objective function value of x0 is cx0z0r5x0564, where z0 is the objective function value of x. So, if we pick any2 0,12, then x0 will be a feasible solution with objective function value strictly less than 6. For example, with 12, x00, 0, 2, 3, 2 is such a point. We could also have obtained this solution by pivoting about the element 1, 5 in the tableau of part a.
16.8
a. The BFS is 6, 0, 7, 5, 0, with objective function value 8.
b. r0,4,0,0,4.
c. Yes, because the 5th column has all negative entries.
d. We pivot about the element 3, 2. The new canonical tableau is:
26 0 013 1 0 83 37 61 0 23 0 0 437 401 13 01 735 00430 0 43
e. First note that based on the 5th column, the following point is feasible: 26 60 37 26 20 37
Note that x5. Now, any solution of the form x, 0, , ,has an objective function value given by
zz0r5
where z08 and r54 from parts a and b. If z100, then 23. Hence, the following point has
objective function value z100:
607 607 6 0 7 x67723 6376767 .
455 415 4285 0 1 23
f. The entries of the 2nd column of the given canonical tableau are the coordinates of a2 with respect to the
basis a4, a1, a3. Therefore,
a2a42a13a3.
x677 637 . 4 50 5 4 1 1 5
26 637 26 237 26 5237
Therefore, the vector 2, 1, 3, 1, 0 lies in the nullspace of A. Similarly, using the entries of the 5th column, we deduce that 2, 0, 3, 1, 1 also lies in the nullspace of A. These two vectors are linearly independent. Because A has rank 3, the dimension of the nullspace of A is 2. Hence, these two vectors form a basis for the nullspace of A.
128
16.9
a. We can convert the problem to standard form by multiplying the objective function by 1 and introducing a surplus variable x3. We obtain:
minimize x122 subject to x2x31
x1,x2,x3 0.
Note that we do not need to deal with the absence of the constraint x20 in the original problem, since x2 1impliesthatx2 0also. Hadweusedtheruleofwritingx2 uvwithu,v0,weobtainthe standard form problem:
minimize x12u2v subjectto uvx3 1 x1,u,v,x30.
b. For phase I, we set up the artificial problem tableau as:
0 1 1 1 1 00010
Pivoting about element 1, 4, we obtain the canonical tableau: 0 1 1 1 1
01 1 01
Pivoting now about element 1, 2, we obtain the next canonical tableau:
0 1 1 1 1 00010
Hence, phase I terminates, and we use x2 as our initial basic variable for phase II. For phase II, we set up the problem tableau as:
0 1 1 1 1200
Pivoting about element 1, 2, we obtain
0 1 1 1
1 0 2 2
Hence, the BFS 0,1,0 is optimal, with objective function value 2. Therefore, the optimal solution to the
original problem is 0,1 with objective function value 2. 16.10
a. 1,0
b.h1 1 1i 1 1 1
Note that the answer is not 0 1 1 , which is the canonical tableau.
c. We choose q2 because the only negative RCC value is r2. However, y1,20. Therefore, the simplex
algorithm terminates with the condition that the problem is unbounded.
d. Anyvectoroftheformx1,x11,x1 1,isfeasible. Thereforethefirstcomponentcantakearbitrarily
large positive values. Hence, the objective function, which is x1, can take arbitrarily negative values. 129
16.11
The problem in standard form is:
We compute
c 1 1 0 0 0 1 1 21
minimize subject to
x1x2
x122x33 21x2x43 x1,x2,x3,x4 0.
We will use x1 and x2 as initial basic variables. Therefore, Phase I is not needed, and we immediately proceed with Phase II. The tableau for the problem is:
a1 a2 a3 a4 b 1 2 1 0 3 2 1 0 1 3
1, 1, 0, 0. Therefore, the solution to the original problem is 1, 1, and the corresponding cost is 2. 16.12
cBB 1,1 2 1 13,13,
rDcD D0,013,131 0 13,13r3,r4.
0 1
The reduced cost coecients are all nonnegative. Hence, the solution to the standard form problem is
a. The problem in standard form is:
minimize subject to
4132
51x2x311 21x2x48 x122x57 x1,,x50.
We do not have an apparent basic feasible solution. Therefore, we will need to use the two phase method. Phase I: We introduce artificial variables x6, x7, x8 and form the following tableau.
a1 a2 a3 a4 a5 a6 a7 a8 b 5 1 1 0 0 1 0 0 11 2 1 0 1 0 0 1 0 8 1 2 0 0 1 0 0 1 7
c 0 0 0 0 0 1 1 1 0 We then form the following revised tableau:
We compute:
Variable B1 y0 x6 1 0 0 11 x7 0108 x8 0017
1,1,1
rDr1,r2,r3,r4,r58,4,1,1,1.
130
We form the augmented revised tableau by introducing y1B1a1a1:
Variable B1 y0 y1 x6 1 0 0 11 5 x7 01082 x8 00171
We now pivot about the first component of y1 to get
We compute
Variable B1 y0 x1 15 0 0 115 x7 25 1 0 185 x8 15 0 1 245
35, 1, 1
r2, r3, r4, r5, r6125, 35, 1, 1, 85.
rD
We bring y2B1a2 into the basis to get
Variable
x1 15 0 0 115 15 x7 25 1 0 185 35 x8 15 0 1 245 95
We pivot about the third component of y2 to get
B1 y0 y2
We compute
Variable B1 y0 x1 29 0 19 53 x7 13 1 13 2 x2 19 0 59 83
13, 1, 13
r3, r4, r5, r6, r813, 1, 13, 43, 43.
We bring y3B1a3 into the basis to get Variable
x1 x7 x2
B1 y0 y3 29 0 19 53 29
13 1 13 2 13 19 0 59 83 19
rD
We pivot about the second component of y3 to obtain
We compute
Variable B1 y0 x1 0 23 13 3 x3 1 3 1 6 x2 0 13 23 2
0,0,0
rDr4,r5,r6,r7,r80,0,1,1,10.
131
Thus, Phase I is complete, and the initial basic feasible solution is 3, 2, 6, 0, 0 . Phase II
We form the tableau for the original problem:
a1 a2 a3 a4 a5 b 5 1 1 0 0 11 2 1 0 1 0 8 1 2 0 0 1 7
c 4 3 0 0 0 0
The initial revised tableau for Phase II is the final revised tableau for Phase I. We compute
0, 53, 23
rDr4, r553, 230.
Hence, the optimal solution to the original problem is 3,2. b. The problem in standard form is:
minimize subject to
61 42 73 54
x1 22 x3 24 x520 61 52 33 24 x6100 31 42 93 124 x775 x1,,x70.
We have an apparent basic feasible solution: 0, 0, 0, 20, 100, 75, corresponding to BI3. We form the revised tableau corresponding to this basic feasible solution:
B1 y0 x5 1 0 0 20
x6 0 1 0 100 x7 0 0 1 75
0,0,0
rDr1, r2, r3, r46, 4, 7, 5.
Variable
We compute
We bring y2B1a3a3 into the basis to obtain
Variable B1 y0 y3 x5 100201 x6 0 1 0 100 3 x7 001759
We pivot about the third component of y3 to get
We compute
Variable
x5 1 0 19 353
x6 0 1 13 75
x3 0 0 19 253
0, 0, 79
r1, r2, r4, r7113, 89, 133, 79.
rD
B1 y0
132
We bring y1B1a1 into the basis to obtain
Variable B1 y0 y1 x5 1 0 19 353 23 x6 0113755 x3 0 0 19 253 13
We pivot about the second component of y1 to obtain
B1 y0
We compute
Variable
x5 1 215 115 53 x1 0 15 115 15 x3 0 115 215 103
0, 1115, 815
r2, r4, r6, r72715, 4315, 1115, 8150.
rD
The optimal solution to the original problem is therefore 15, 0, 103, 0 . 16.13
a. By inspection of r, we conclude that the basic variables are x1,x3,x4, and the basis matrix is
260 0 137 B41 0 05.
010
Since r0, the basic feasible solution corresponding to the basis B is optimal. This optimal basic
feasible solution is 8, 0, 9, 7 .
b. An optimal solution to the dual is given by
where cB6, 4, 5, and
c B B1 ,
1 2601037 B 40 0 15.
100
c. We have rDcDD, where rD1, cDc2, 5,6,4, and D2,1,3. We get
We obtain 5, 6, 4. 1c210612,whichyieldsc2 29.
16.14
a. There are two basic feasible solutions: 1, 0 and 0, 2 .
b. The feasible set in R2 for this problem is the line segment joining the two basic feasible solutions 1, 0 and 0,2. Therefore, if the problem has an optimal feasible solution that is not basic, then all points in the feasible set are optimal. For this, we need
c12, c2 1
where2 R.
c. Since all basic feasible solutions are optimal, the relative cost coecients are all zero.
133
16.15
a. 20, 0, 0, andanything. b. 20, 7,andanything.
c. 20, 0, either 0 or 54, andanything.
16.16
a. The value ofmust be 0, because the objective function value is 0 lower right corner, andis the value of an artificial variable.
The value ofmust be 0, because it is the RCC value corresponding to a basic column.
The value ofmust be 2, because it must be a positive value. Otherwise, there is a feasible solution to the artificial problem with objective function value smaller than 0, which is impossible.
The value ofmust be 0, because we must be able to bring the fourth column into the basis without changing the objective function value.
b. The given linear programming problem does indeed have a feasible solution: 0, 5, 6, 0. We obtain this by noticing that the rightmost column is a linear combination of the second and third columns, with coecients 5 and 6.
16.17
First, we convert the inequality constraint Axb into standard form. To do this, we introduce a variable w 2 Rm of surplus variables to convert the inequality constraint into the following equivalent constraint:
A,Iwxb, w0.
Next, we introduce variables u, v 2 Rn to replace the free variable x by uv. We then obtain the following
equivalent constraint:
This form of the constraint is now in standard form. So we can now use Phase I of the simplex method to implement an algorithm to find a vectors u, v, and w satisfying the above constraint, if such exist, or to declare that none exists. If such exist, we output xuv; otherwise, we declare that no x exists such that Axb. By construction, this algorithm is guaranteed to behave in the way specified by the question.
16.18
a. We form the tableau for the problem:
1 0 0 14 8 1 9 0 010 12 121230 0010 0 101 00034 20 1260
The above tableau is already in canonical form, and therefore we can proceed with the simplex procedure. We first pivot about the 1, 4th element, to get
4 0 0 1 32 4 36 0 2 1 0 0 4 32 15 0 00100101 3 0 0 0 4 72 33 0
Pivoting about the 2, 5th element, we get
12 8 0 1 0 8 84 0 12 14 0 0 1 38 154 0 00100101 1 1 0 0 0 2 18 0
26 u 37
A,A,I4wv5b, u,v,w0.
134
Pivoting about the 1, 6th element, we get
32 1 0 18 0 1 116 18 0 364 1 0 32 1 1 18 0 0 2 3 0 14 0 0
Pivoting about the 2, 7th element, we get
2 6 0 52 56 13 23 0 14 163 2 6 1 52 56
1 1 0 12 16 Pivoting about the 1, 1th element, we get
212 0 316 0 212 1
3 0
100 0 1 0 0 0 1 000
1 3 0 54 28 12 0 0 0130 16 41610 00100101 0 2 0 74 44 12 0 0
Pivoting about the 2, 2th element, we get
1 0 0 14 8 1 9 0 010 12 121230 0010 0 101 00034 20 1260
which is identical to the initial tableau. Therefore, cycling occurs.
b. We start with the initial tableau of part a, and pivot about the 1, 4th element to obtain
4 0 0 1 32 4 36 0 2 1 0 0 4 32 15 0 00100101 3 0 0 0 4 72 33 0
Pivoting about the 2, 5th element, we get
12 8 0 1 0 8 84 0 12 14 0 0 1 38 154 0 00100101 1 1 0 0 0 2 18 0
Pivoting about the 1, 6th element, we get
32 1 0 18 0 116 18 0 364 1 32 1 1 18 0 2 3 0 14 0
Pivoting about the 2, 1th element, we get
0 2 0 1 24 12034 16 0 2 1 1 24 01054 32
1 212 0 0 316 0 0 212 1 0 3 0
160 0 3 0 0 6 1 0 3 0
135
Pivoting about the 3, 2th element, we get
00100101 1 0 1 14 8 0 9 1 0112 12 120312 001234 20 0612
Pivoting about the 3, 4th element, we get
00100101 112340 2 015234 0 2 1 1 24 0 6 1 0 32 54 0 2 0 212 54
The reduced cost coecients are all nonnegative. Hence, the optimal solution to the problem is 34, 0, 0, 1, 0, 1, 0. The corresponding optimal cost is 54.
16.19
a. We have
Ad0Ax1x00bb00.
b. From our discussion of moving from one BFS to an adjacent BFS, we deduce that
d0 yq . eqm
In other words, the first m components of d0 are y1q , . . . , ymq , and all the other components are 0 except the qth component, which is 1.
16.20
The following is a MATLAB function that implements the simplex algorithm.
function x,vsimplexc,A,b,v,options
SIMPLEXc,A,b,v;
SIMPLEXc,A,b,v,options;
xSIMPLEXc,A,b,v;
xSIMPLEXc,A,b,v,options;
x,vSIMPLEXc,A,b,v;
x,vSIMPLEXc,A,b,v,options;
SIMPLEXc,A,b,v solves the following linear program using the
Simplex Method:
min cxsubject to Axb, x0,
where A b is in canonical form, and v is the vector of indices of
basic columns. Specifically, the vith column of A is the ith
standard basis vector.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS5 specifies how the pivot element is selected;
0choose the most negative relative cost coefficient;
1use Blands rule.
Hence, d0 2 N A.
136
if nargin5
options;
if nargin4
dispWrong number of arguments.;
return; end
end
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
nlengthc;
mlengthb;
cBcv:;
rccBA; row vector of relative cost coefficients
costcBb;
tablA b;r cost;
if print,
disp ;
dispInitial tableau:;
disptabl;
end if
while ones1,nrzerosn,1n
if options50;
rq,qminr;
else
Blands rule
q1;
while rq0
qq1; end
end if
minratioinf;
p0;
for i1:m,
if tabli,q0
if tabli,n1tabli,qminratio
minratiotabli,n1tabli,q;
pi; end if
end if
end for
if p0
dispProblem unbounded;
break;
end if
tablpivottabl,p,q;
137
if print,
dispPivot point:;
dispp,q;
dispNew tableau:;
disptabl;
end if
vpq;
rtablm1,1:n;
end while
xzerosn,1;
xv:tabl1:m,n1;
The above function makes use of the following function that implements pivoting:
function MnewpivotM,p,q
MnewpivotM,p,q
Returns the matrix Mnew resulting from pivoting about the
p,qth element of the given matrix M.
for i1:sizeM,1,
if ip
Mnewp,:Mp,:Mp,q;
else
Mnewi,:Mi,:Mp,:Mi,qMp,q;
end if
end for
We now apply the simplex algorithm to the problem in Example 16.2, as follows:
A1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1;
b4;6;8;
c2;5;0;0;0;
v3;4;5;
options11;
x,vsimplexc,A,b,v,options;
Initial Tableau: 101004 010106 110018
25 0 0 0 0
Pivot point:
22 New tableau:
101004 010106 1 0 0 1 1 2 2 0 0 5 0 30
Pivot point: 31
New tableau:
0 0 1 1 1 2 010106 1 0 0 1 1 2 0 0 0 3 2 34
dispx; 26200
138
dispv; 321
As indicated above, the solution to the problem in standard form is 2,6,2,0,0, and the objective function value is 34. The optimal cost for the original maximization problem is 34.
16.21
The following is a MATLAB routine that implements the twophase simplex method, using the MATLAB function from Exercise 16.20.
function x,vtpsimplexc,A,b,options
TPSIMPLEXc,A,b;
TPSIMPLEXc,A,b,options;
xTPSIMPLEXc,A,b;
xTPSIMPLEXc,A,b,options;
x,vTPSIMPLEXc,A,b;
x,vTPSIMPLEXc,A,b,options;
TPSIMPLEXc,A,b solves the following linear program using the
twophase simplex method:
min cxsubject to Axb, x0.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS5 specifies how the pivot element is selected;
0choose the most negative relative cost coefficient;
1use Blands rule.
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return;
end end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
nlengthc;
mlengthb;
Phase I
if print,
disp ;
dispPhase I;
disp;
end
vnonesm,1;
for i1:m
vivii;
139
end
x,vsimplexzerosn,1;onesm,1,A eyem,b,v,options;
if allvn,
Phase II
if print
disp ;
dispPhase II;
disp;
dispBasic columns:
dispv
end
Convert A b into canonical augmented matrix
BinvinvA:,v;
ABinvA;
bBinvb;
x,vsimplexc,A,b,v,options;
if print
disp ;
dispFinal solution:;
dispx;
end else
assumes nondegeneracy
dispTerminating: problem has no feasible solution.;
end
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
A1 1 1 0; 5 3 0 1;
b4;8;
c3;5;0;0;
options11;
format rat;
tpsimplexc,A,b,options;
Phase I
Initial Tableau: 1110104 5 3 0 1 0 1 8
641 1 0 012 Pivot point:
21 New tableau:
0 25 1 15 1 15 125 1 35 015 0 1585 0 25 1 15 0 65 125
Pivot point: 13
New tableau:
0 25 1 15 1 15 125 1 35 015 0 1585 0011
140
Pivot point: 22
New tableau:
230 1131 13 43
53 1 013 0 1383 0011
Pivot point: 14
New tableau:
2 0 3 1 31 4
1110104 0011
Basic columns: 42
Phase II
Initial Tableau:
2 0 3 1 4
11104 2 0 5 0 20
Final solution: 0404
16.22
The following is a MATLAB function that implements the revised simplex algorithm.
function x,v,Binvrevsimpc,A,b,v,Binv,options
REVSIMPc,A,b,v,Binv;
REVSIMPc,A,b,v,Binv,options;
xREVSIMPc,A,b,v,Binv;
xREVSIMPc,A,b,v,Binv,options;
x,v,BinvREVSIMPc,A,b,v,Binv;
x,v,BinvREVSIMPc,A,b,v,Binv,options;
REVSIMPc,A,b,v,Binv solves the following linear program using the
revised simplex method:
min cxsubject to Axb, x0,
where v is the vector of indices of basic columns, and Binv is the
inverse of the basis matrix. Specifically, the vith column of
A is the ith column of the basis vector.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS5 specifies how the pivot element is selected;
0choose the most negative relative cost coefficient;
1use Blands rule.
if nargin6
options;
if nargin5
dispWrong number of arguments.;
return; end
141
end
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
nlengthc;
mlengthb;
cBcv:;
y0Binvb;
lambdaTcBBinv;
rclambdaTA; row vector of relative cost coefficients
if print,
disp ;
dispInitial revised tableau v B1 y0:;
dispv Binv y0;
dispRelative cost coefficients:;
dispr;
end if
while ones1,nrzerosn,1n
if options50;
rq,qminr;
else
Blands rule
q1;
while rq0
qq1; end
end if
yqBinvA:,q;
minratioinf;
p0;
for i1:m,
if yqi0
if y0iyqiminratio
minratioy0iyqi;
pi; end if
end if
end for
if p0
dispProblem unbounded;
break;
end if
if print,
dispAugmented revised tableau v B1 y0 yq:
dispv Binv y0 yq;
dispp,q:;
dispp,q;
end
augrevtablpivotBinv y0 yq,p,m2;
142
Binvaugrevtabl:,1:m;
y0augrevtabl:,m1;
vpq;
cBcv:;
lambdaTcBBinv;
rclambdaTA; row vector of relative cost coefficients
if print,
dispNew revised tableau v B1 y0:;
dispv Binv y0;
dispRelative cost coefficients:;
dispr;
end if
end while
xzerosn,1;
xv:y0;
The function makes use of the pivoting function in Exercise 16.20.
We now apply the simplex algorithm to the problem in Example 16.2, as follows:
A1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1;
b4;6;8;
c2;5;0;0;0;
v3;4;5;
Binveye3;
options11;
x,v,Binvrevsimpc,A,b,v,Binv,options;
Initial revised tableau v B1 y0: 31004 40106 50018
Relative cost coefficients:
25 0 0 0
Augmented revised tableau v B1 y0 yq: 310040 401061 500181
p,q: 22
New revised tableau v B1 y0: 31004 20106
5 0 1 1 2
Relative cost coefficients:
2 0 0 5 0
Augmented revised tableau v B1 y0 yq: 310041 201060
5 0 1 1 2 1
p,q: 31
New revised tableau v B1 y0: 3 1 1 1 2 20106
1 0 1 1 2
143
Relative cost coefficients: 00032
dispx; 26200
dispv; 321
dispBinv;
1 11
010 0 1 1
16.23
The following is a MATLAB routine that implements the twophase revised simplex method, using the MATLAB function from Exercise 16.22.
function x,vtprevsimpc,A,b,options
TPREVSIMPc,A,b;
TPREVSIMPc,A,b,options;
xTPREVSIMPc,A,b;
xTPREVSIMPc,A,b,options;
x,vTPREVSIMPc,A,b;
x,vTPREVSIMPc,A,b,options;
TPREVSIMPc,A,b solves the following linear program using the
twophase revised simplex method:
min cxsubject to Axb, x0.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS5 specifies how the pivot element is selected;
0choose the most negative relative cost coefficient;
1use Blands rule.
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return;
end end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
nlengthc;
mlengthb;
Phase I
if print,
disp ;
dispPhase I;
disp;
144
end
vnonesm,1;
for i1:m
vivii;
end
x,v,Binvrevsimpzerosn,1;onesm,1,A eyem,b,v,eyem,options;
Phase II
if print
disp ;
dispPhase II;
disp;
end
x,v,Binvrevsimpc,A,b,v,Binv,options;
if print
disp ;
dispFinal solution:;
dispx;
end
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
A4 2 1 0; 1 4 0 1;
b12;6;
c2;3;0;0;
options11;
format rat;
tprevsimpc,A,b,options;
Phase I
Initial revised tableau v B1 y0:
5 1 012
6016 Relative cost coefficients:
56 1 1 0 0
Augmented revised tableau v B1 y0 yq:
5 1 0 12 2
60164 p,q:
22
New revised tableau v B1 y0:
5 1 12 9
2 014 32
Relative cost coefficients:
720 1 12032
Augmented revised tableau v B1 y0 yq:
5 1 12 9 72
2 014 32 14
p,q:
11
New revised tableau v B1 y0:
12717187
2 11427 67
145
Relative cost coefficients: 000011
Phase II
Initial revised tableau v B1 y0:
12717187
2 11427 67
Relative cost coefficients:
0 51447
Final solution:
187 670 0
17. Duality
17.1
Since x andare feasible, we have Axb, x0, and Ac, 0. Postmultiplying both sides of
Ac byx0yields
SinceAxband 0,wehaveAxb. Hence,bcx.
17.2
The primal problem is:
Axcx.
minimize en p subject to GpPem
p0,
where Ggi,j, en1,,1 with n components, and pp1,,pn. The dual of the problem is
using symmetric duality:
17.3
maximize subject to
Pem Gen 0.
a. We first transform the problem into standard form:
The initial tableau is:
minimize 2132 subject to x122x34 21x2x45
x1,x2,x3,x4 0.
12104
21015 2 3 0 0 0
We now pivot about the 1, 2th element to get:
12 1 12 0 2
32 01213 120 32 06
146
Pivoting now about the 2, 1th element gives:
01 23 131 1013 23 2 0 0 43 13 7
Thus, the solution to the standard form problem is x12, x21, x30, x40. The solution to the original problem is x12, x21.
b. The dual to the standard form problem is
maximize subject to
4152
1222 2123 1,2 0.
From the discussion before Example 17.6, it follows that the solution to the dual is cIrI43, 13.
17.4
The dual problem is
maximize subject to
1118273 512234 12233 1,2,3 0.
Note that we may arrive at the above in one of two ways: by applying the asymmetric form of duality, or by applying the symmetric form of duality to the original problem in standard form. From the solution of Exercise 16.11a, we have that the solution to the dual is cBB10,53,23 using the proof of the duality theorem.
17.5
We represent the primal in the form
The corresponding dual is
that is,
maximize subjectto
10001
The solution to the dual can be obtained using the formula, cBB1, where
minimize subject to
maximize subject to
217233
c x Axb x0.
b
Ac,
h i262 1 1 0 037 h i 1 2 3 41 2 0 1 05 1 2 0 0 0 .
h i 262 1 137 cB 1 2 0 and B41 2 05.
147
100
Note that because the last element in cB is zero, we do not need to calculate the last row of B1 when computing , that is, these elements are dont care elements that we denote using the asterisk. Hence,
Note that as expected.
17.6
1 h i260 0 137 h icBB 1 2 040 12 1250 1 2.
cxb13,
a. Multiplying the objective function by 1, we see that the problem is of the form of the dual in the asymmetric form of duality. Therefore, the dual to the problem is of the form of the primal in the asymmetric form:
minimizeb
subject to Ac
0
b. The given vector y is feasible in the dual. Since b0, any feasible point in the dual is optimal. Thus, y is optimal in the dual, and the objective function value for y is 0. Therefore, by the Duality Theorem, the primal also has an optimal feasible solution, and the corresponding objective function value is 0. Since the vector 0 is feasible in the primal and has objective function value 0, the vector 0 is a solution to the primal.
17.7
We introduce two sets of nonnegative variables: xi0, xi0, i1, 2, . . . , 3. We can then represent the optimization problem in the form
minimize x1 x1 x2 x2 x3 x3
2 x 1 3
6 6 x 2 7 7 637 2 01 0 0 1 0 6 6 x 1 7 7 1
4 x 2 5 x3
subjectto 1 1 1 1 1 1
We form the initial tableau,
xi0 , x i0 .
x1 1 0
c 1
We next calculate the reduced cost coecients,
x1 1 0
x2 x3 x1 x2 x3 b 1 1 1 1 1 2 1 0 0 1 0 1 1 1 1 1 1 0
c 1
There is no apparent basic feasible solution. We add the second row to the first one to obtain,
x2 x3 x1 x2 x3 b 0 1 1 0 1 3 1 0 0 1 0 1 1 1 1 1 1 0
x1 x2 x3 x1 x2 x3 b 1 0 1 1 0 1 3 0 1 0 0 1 0 1
c 0 2 2 2 0 0 4 148
We have zeros under the basic columns. The reduced cost coecients are all nonnegative. The optimal
solution is,
xh3 0 0 0 1 0i. The optimal solution to the original problem is x3, 1, 0 .
The dual of the above linear program is
maximize 212
subjectto h1 2i 1 1 1 1 1 1 h1 1 1 1 1 1i. 0 1 0 0 1 0
The optimal solution to the dual is
h1 1i 1 11
17.8
a. The dual asymmetric form is
01h1 2i.
maximize
subject to ai1, i1,,n.
cB B1
We can write the constraint as
Therefore, the solution to the dual problem is
min1ai : i1,,n1an. 1an.
b. Duality Theorem: If the primal problem has an optimal solution, then so does the dual, and the optimal values of their respective objective functions are equal.
By the duality theorem, the primal has an optimal solution, and the optimal value of the objective function is 1an. The only feasible point in the primal with this objective function value is the basic feasible solution 0,,0,1an.
c. Suppose we start at a nonoptimal initial basic feasible solution, 0, . . . , 1ai, . . . , 0, where 1in1. The relative cost coecient for the qth column, q 6 i, is
rq1aq . ai
Since anaj for any j 6 n, rq is the most negative relative cost coecient if and only if qn. 17.9
a. By asymmetric duality, the dual is given by minimize
subject to ci, i1,,n.
b. The constraint in part a implies thatis feasible if and only if c4. Hence, the solution is c4.
c. By the duality theorem, the optimal objective function value for the given problem is c4. The only solution thatachievesthisvalueisx4 1andxi 0foralli64.
149
17.10
a. The dual is
where e1,,1 and zx,y. b. The dual to the artificial problem is:
maximize subject to
minimize0 subject to Ac
0.
b. By the duality theorem, we conclude that the optimal value of the objective function is 0. The only
vector satisfying x0 that has an objective function value of 0 is x0. Therefore, the solution is x0.
c. The constraint set contains only the vector 0. Any other vector x satisfying x0 has at least one positive component, and consequently has a positive objective function value. But this contradicts the fact that the optimal solution has an objective function value of 0.
17.11
a. The artificial problem is:
minimize subject to
0, ez A,Izb z0,
b
A0 e.
c. Suppose the given original linear programming problem has a feasible solution. By the FTLP, the original LP problem has a BFS. Then, by a theorem given in class, the artificial problem has an optimal feasible solution with y0. Hence, by the Duality Theorem, the dual of the artificial problem also has an optimal feasible solution.
17.12
a. Possible. This situation arises if the primal is unbounded, which by the Weak Duality Lemma implies that the dual has no feasible solution.
b. Impossible, because the Duality Theorem requires that if the primal has an optimal feasible solution, then so does the dual.
c. Impossible, because the Duality Theorem requires that if the dual has an optimal feasible solution, then so does the primal. Also, the Weak Dual Lemma requires that if the primal is unbounded i.e., has a feasible solution but no optimal feasible solution, then the dual must have no feasible solution.
17.13
To prove the result, we use Theorem 17.3 Complementary Slackness. Since 0, we have Acc. Hence,is a feasible solution to the dual. Now, cAxx0. Therefore, by Theorem 17.3, x andare optimal for their respective problems.
17.14
To use the symmetric form of duality, we need to rewrite the problem as
minimize cuv, subject to Auvb
u, v0, 150
which we represent in the form
uv 0 . By the symmetric form of duality, the dual is:
minimize subject to
c cuv,
A A uvb
maximize subject to
b
A Ac c 0.
Note that for the constraint involving A, we have
AAc c , Ac andAc
Therefore, we can represent the dual as
17.15
,
minimize subject to
Ac.
b
Ac 0.
3132 1221 2121 1,2 0.
The corresponding dual can be written as:
maximize subject to
To solve the dual, we refer back to the solution of Exercise 16.11. Using the idea of the proof of the duality theorem Theorem 17.2, we obtain the solution to the dual as cBB113,13. The cost of the dual problem is 2, which verifies the duality theorem.
17.16
The dual to the above linear program asymmetric form is maximize0
subject to Oc.
The above dual problem has a feasible solution if and only if c0. Since any feasible solution to the dual is also optimal, the dual has an optimal solution if and only if c0. Therefore, by the duality theorem, the primal problem has a solution if and only if c0.
If the solution to the dual exists, then the optimal value of the objective function in the primal is equal to that of the dual, which is clearly 0. In this case, 0 is optimal, since c00.
17.17
Consider the primal problem
minimize subject to
151
0 x Axb x0,
and its corresponding dual
17.18
a. The dual is
maximize yb subject to yA0
y0.
: By assumption, there exists a feasible solution to the primal problem. Note that any feasible solution is also optimal, and has objective function value 0. Suppose y satisfies Ay0 and y0. Then, y is a feasible solution to the dual. Therefore, by the Weak Duality Lemma, by0.
: Note that the feasible region for the dual is nonempty, since 0 is a feasible point. Also, by assumption, 0 is an optimal solution, since any other feasible point y satisfies byb00. Hence, by the duality theorem, the primal problem has an optimal feasible solution.
maximize yb subject to yA0.
b. The feasible set of the dual problem is always nonempty, because 0 is clearly guaranteed to be feasible. c. Suppose y is feasible in the dual. Then, by assumption, by0. But the point 0 is feasible and has
objective function value 0. Hence, 0 is optimal in the dual.
d. By parts b and c, the dual has an optimal feasible solution. Hence, by the duality theorem, the primal
problem also has an optimal feasible solution.
e. By assumption, there exists a feasible solution to the primal problem. Note that any feasible solution in the primal has objective function value 0 and hence so does the given solution. Suppose y satisfies Ay0. Then, y is a feasible solution to the dual. Therefore, by weak duality, by0.
17.19
Consider the primal problem
and its corresponding dual
minimize subject to
maximize subject to
yb yA0 y0
0 x Axb.
: By assumption, there exists a feasible solution to the dual problem. Note that any feasible solution is also optimal, and has objective function value 0. Suppose y satisfies Ay0 and y0. Then, y is a feasible solution to the primal. Therefore, by the Weak Duality Lemma, by0.
: Note that the feasible region for the primal is nonempty, since 0 is a feasible point. Also, by as sumption, 0 is an optimal solution, since any other feasible point y satisfies byb00. Hence, by the duality theorem, the dual problem has an optimal feasible solution.
17.20
Let e1, . . . , 1 . Consider the primal problem minimize
0 x
Axe
ey yA0 y0.
152
and its corresponding dual
subject to
maximize subject to
: Suppose there exists Ax0. Then, the vector x0xminAxi is a feasible solution to the primal problem. Note that any feasible solution is also optimal, and has objective function value 0. Suppose y satisfies Ay0, y0. Then, y is a feasible solution to the dual. Therefore, by the Weak Duality Lemma, ey0. Since y0, we conclude that y0.
: Suppose 0 is the only feasible solution to the dual. Then, 0 is clearly also optimal. Hence, by the duality theorem, the primal problem has an optimal feasible solution x. Since Axe and e0, we get Ax0.
17.21
a. Rewrite the primal as
By asymmetric duality, the dual is
minimize subject to
maximize subject to
e x
PIx0 x0.
0
PIe.
b. To make the notation simpler, we rewrite the dual as: maximize 0
subject to PIye.
Suppose the dual is feasible. Then, there exists a y such that P yyey. Let yi be the largest element of y, and pi the ith row of P. Then, piyyi. But, by definition of yi, yyie. Hence, piyyipieyi, which contradicts the inequality piyyi. Hence, the dual is not feasible.
c. The primal is certainly feasible, because 0 is a feasible point. Therefore, by part b and strong duality, the primal must also be unbounded.
d. Because 0 is an achievable objective function value it is the objective function value of 0, and the problem is unbounded, we deduce that 1 is also achievable. Hence, there exists a feasible x such that xe1. This proves the desired result.
17.22
Write the LP problem
and the corresponding dual problem
minimize subject to
maximize subject to
c x Axb x0
b
Ac 0.
By a theorem on duality, if we can find feasible points x andfor the primal and dual, respectively, such that cxb, then x andare optimal for their respective problems. We can rewrite the previous set
of relations as
2c b 3 203 6c b7607
6A 0 7 x 6b7. 6In 0 7607
40 A5 4c5 0Im 0
153
Therefore,writingtheaboveasAyb,whereA2R2m2n2mnandb2R2m2n2,wehavethat the first n components of 2m2n2, mn, A , b is a solution to the given linear programming problem.
17.23
a. Consider the dual; b does not appear in the constraint but it does appear in the dual objective function. Thus, provided the level sets of the dual objective function do not exactly align with one of the faces of the constraint set polyhedron, the optimal dual vector will not change if we perturb b very slightly. Now, by the duality theorem, zbb. Becauseis constant in a neighborhood of b, we deduce that rzb.
b. By part a, we deduce that the optimal objective function value will change by 3b1.
17.24
a. Weak duality lemma: if x0 and y0 are feasible points in the primal and dual, respectively, then f1x0f2y0.
Proof: Because y00 and Ax0b0, we have y0 Ax0b0. Therefore, f1x0f1x0y0 Ax0b
Now, we know that
where xAy0. Hence,
10 x0 y0 Ax0 y0 b. 2
10 x0y0 Ax01xxy0 Ax, 22
10 x0y0 Ax01y0 AAy0y0 AAy01y0 AAy0. 222
Combining this with the above, we have f1x0
Alternatively, notice that
f1x0f2y0
b. Suppose f1x0f2y0 for feasible points x0 and y0. Let x be any feasible point in the primal. Then, by part a, f1xf2y0f1x0. Hence x0 is optimal in the primal.
Similarly, let y be any feasible point in the dual. Then, by part a, f2yf1x0f2y0. Hence y0 is optimal in the dual.
18. NonSimplex Methods
18.1
The following is a MATLAB function that implements the ane scaling algorithm.
function x,Naffscalec,A,b,u,options;
AFFSCALEc,A,b,u;
AFFSCALEc,A,b,u,options;
1y0 AAy0y0 b 2
f2 x0 .
10 x01y0 AAy0by0
22
10 x01y0 AAy0x0 Ay0 22
0.
1kx0Ay0k2 2
154
xAFFSCALEc,A,b,u;
xAFFSCALEc,A,b,u,options;
x,NAFFSCALEc,A,b,u;
x,NAFFSCALEc,A,b,u,options;
AFFSCALEc,A,b,u solves the following linear program using the
affine scaling Method:
min cxsubject to Axb, x0,
where u is a strictly feasible initial solution.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required cost value.
OPTIONS14max number of iterations.
OPTIONS18alpha.
if nargin5
options;
if nargin4
dispWrong number of arguments.;
return; end
end xnewu;
if lengthoptions14
if options140
options141000lengthxnew;
end
else
options141000lengthxnew;
end
if lengthoptions18
options180.99; optional step size
end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
epsilonxoptions2;
epsilonfoptions3;
maxiteroptions14;
alphaoptions18;
nlengthc;
mlengthb;
for k1:maxiter,
xcurrxnew;
Ddiagxcurr;
155
AbarAD;
PbareyenAbarinvAbarAbarAbar;
dDPbarDc;
if dzerosn,1,
nonzdfindd0;
rminxcurrnonzd.dnonzd;
else
dispTerminating: d0;
break; end
xnewxcurralphard;
if print,
dispIteration number k
dispk;print iteration index k
dispalphak ;
dispalphar;print alphak
dispNew point ;
dispxnew; print new point
end if
if normxnewxcurrepsilonxnormxcurr
dispTerminating: Relative difference between iterates ;
dispepsilonx;
break;
end if
if abscxnewxcurrepsilonfabscxcurr,
dispTerminating: Relative change in objective function;
dispepsilonf;
break;
end if
if kmaxiter
dispTerminating with maximum number of iterations;
end if
end for
if nargout1
xxnew;
if nargout2
Nk;
end else
dispFinal point ;
dispxnew;
dispNumber of iterations ;
dispk;
end if
We now apply the ane scaling algorithm to the problem in Example 16.2, as follows:
A1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1;
b4;6;8;
c2;5;0;0;0;
u2;3;2;3;3;
options10;
options2107;
options3107;
156
affscalec,A,b,u,options;
Terminating: Relative difference between iterates
1.0000e07
Final point
2.0000e00 6.0000e00 2.0000e00 1.0837e09 1.7257e08
Number of iterations
8
The result obtained after 8 iterations as indicated above agrees with the solution in Example 16.2: 2, 6, 2, 0, 0.
18.2
The following is a MATLAB routine that implements the twophase ane scaling method, using the MAT LAB function from Exercise 18.1.
function x,Ntpaffscalec,A,b,options
March 28, 2000
TPAFFSCALEc,A,b;
TPAFFSCALEc,A,b,options;
xTPAFFSCALEc,A,b;
xTPAFFSCALEc,A,b,options;
x,NTPAFFSCALEc,A,b;
x,NTPAFFSCALEc,A,b,options;
TPAFFSCALEc,A,b solves the following linear program using the
TwoPhase Affine Scaling Method:
min cxsubject to Axb, x0.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required cost value.
OPTIONS14max number of iterations.
OPTIONS18alpha.
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return;
end end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
nlengthc;
mlengthb;
Phase I
if print,
disp ;
157
dispPhase I;
disp;
end
urandn,1;
vbAu;
if vzerosm,1,
uaffscalezeros1,n,1,A v,b,u 1,options;
un1;
end
if print
disp
dispInitial condition for Phase II:
dispu
end
if un1options2,
Phase II
un1;
if print
disp ;
dispPhase II;
disp;
dispInitial condition for Phase II:;
dispu;
end
x,Naffscalec,A,b,u,options;
if nargout0
dispFinal point ;
dispx;
dispNumber of iterations ;
dispN;
end if else
dispTerminating: problem has no feasible solution.;
end
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
A1 1 1 0; 5 3 0 1;
b4;8;
c3;5;0;0;
options10;
tpaffscalec,A,b,options;
Terminating: Relative difference between iterates
1.0000e07
Terminating: Relative difference between iterates
1.0000e07
Final point
4.0934e09 4.0000e00 9.4280e09 4.0000e00
Number of iterations
7
The result obtained above agrees with the solution in Example 16.5: 0, 4, 0, 4.
18.3
The following is a MATLAB routine that implements the ane scaling method applied to LP problems of the form given in the question by converting the given problem in Karmarkars artificial form and then using the MATLAB function from Exercise 18.1.
158
function x,Nkaraffscalec,A,b,options
KARAFFSCALEc,A,b;
KARAFFSCALEc,A,b,options;
xKARAFFSCALEc,A,b;
xKARAFFSCALEc,A,b,options;
x,NKARAFFSCALEc,A,b;
x,NKARAFFSCALEc,A,b,options;
KARAFFSCALEc,A,b solves the following linear program using the
Affine Scaling Method:
min cxsubject to Axb, x0.
We use Karmarkars artificial problem to convert the above problem into
a form usable by the affine scaling method.
The second variant allows a vector of optional parameters to be
defined:
OPTIONS1 controls how much display output is given; set
to 1 for a tabular display of results default is no display: 0.
OPTIONS2 is a measure of the precision required for the final point.
OPTIONS3 is a measure of the precision required cost value.
OPTIONS14max number of iterations.
OPTIONS18alpha.
if nargin4
options;
if nargin3
dispWrong number of arguments.;
return;
end end
clc;
format compact;
format short e;
optionsfoptionsoptions;
printoptions1;
nlengthc;
mlengthb;
Convert to Karmarkars aftificial problem
x0onesn,1;
l0onesm,1;
u0onesn,1;
v0onesm,1;
AA
c b zeros1,n zeros1,m cx0bl0;
A zerosm,m zerosm,n eyem bAx0v0;
zerosn,n A eyen zerosn,m cAl0
;
bb0; b; c;
cczeros2m2n,1; 1;
y0x0; l0; u0; v0; 1;
y,Naffscalecc,AA,bb,y0,options;
159
if ccyoptions3,
xy1:n;
if nargout0
dispFinal point ;
dispx;
dispFinal cost ;
dispcx;
dispNumber of iterations ;
dispN;
end if else
dispTerminating: problem has no optimal feasible solution.;
end
We now apply the above MATLAB routine to the problem in Example 15.15, as follows:
c3;5;
A1 5; 2 1; 1 1;
b40;20;12;
options2104;
karaffscalec,A,b,options;
Terminating: Relative difference between iterates
1.0000e04
Final point
5.1992e00 6.5959e00
Final cost
4.8577e01
Number of iterations
3
The solution from Example 15.15 is 5,7. The accuracy of the result obtained above is disappointing. We believe that the inaccuracy here may be caused by our particularly simple numerical implementation of the ane scaling method. This illustrates the numerical issues that must be dealt with in any practically useful implementation of the ane scaling method.
18.4
a. Suppose TxTy. Then, TixTiy for i1,,n1. Note that for i1,,n, TixxiaiTn1x and TiyyiaiTn1y. Therefore,
TixxiaiTn1xTiyyiaiTn1yyiaiTn1x, which implies that xiyi, i1,,n. Hence xy.
b. Lety2x2:xn1 0. Henceyn1 0. Definexx1,,xn byxi aiyiyn1,i1,,n. Then, T xy. To see this, note that
Tn1x1yn1yn1. y1yn1 ynyn1 1 y1 yn yn1
Also, for i1,,n,
c. An immediate consequence of the solution to part b.
d. We have
and, for i1,,n,
Tn1a 1
a1a1 anan 1
1 , n1
Tixyiyn1Tn1xyi.
TiaaiaiTn1a1 n1
.
160
e. Since yTx, we have that for i1,,n, yixiaiyn1. Therefore, x0iyiaixiyn1, which implies that x0yn1x. Hence, Ax0yn1Axbyn1.
18.5
Letx2Rn,andyTx. Letai betheithcolumnofA,i1,,n. Asinthehint,letA0 begivenby A0a1a1,,anan,b.
Then,
Axb,Axb0 26137
, a ,,a ,b6 . 70 1 n 64 x n 75
1
26 x1a1 37
, a a ,,a a ,b6 . 70
1 1 n n 64 xn an 75
1 26 x1a1yn1 37
, A06 .
64 xn an yn1 75
70 The result follows from Exercise 18.5 by setting A : c and b : 0.
18.7
Considerthesetx2Rn :ex1,x0,x1 0,whichcanbewrittenasx2Rn :Axb,x0, where e 1
Ae, b0, 1
with e1,,1, e11,0,,0. Let a0en. By Exercise 12.20, the closest point on the set x:Axbtothepointa0 is
x AAA1bAa0a0 0, 1 ,, 1 . n1 n1
Sincex 2x:Axb,x0x:Axb,thepointx isalsotheclosestpointontheset x : Axb,x0 to the point a0.
Let rka0xk. Then, the sphere of radius r is inscribed in . Note that
rka0xkp 1 . p
nn1
Hence, the radius of the largest sphere inscribed inis larger than or equal to 1 nn1. It remains to show that this largest radius is lpess than or equal to 1 nn1. To this end, we show that this largest radiusislessthanorequalto1 nn1forany0. Forthis,itsucestoshowthatthesphereof
18.6
yn1 , A0y0.
161
radius 1pnn1 is not inscribed in . To show this, consider the point xxxa0
kx a0k p
x nn1x a0
rn1 1 1 x,p ,,pp
It is easy to verify that the point x above is on the sphere of radiups 1 nn1. However, clearly the first component of x is negative. Therefore, the sphere of radius 1 nn1 is not inscribed in . Our proof is thus completed.
18.8
We first consider the constraints. We claim that x 2, x2 . To see this, note that if x 2 , then
ADx 0. To see this, write
Since eD1x0, we have Ax0 , ADx 0. Finally, we claim that if x is an optimal solution to the original problem, then x Ux is an optimal solution to the transformed problem. To see this, recall that the problem in a Karmarkars restricted problem, and hence by Assumption B we have cx0. We now note that the minimum value of the objective function cDxin the transformed problem is zero. This is because cDx cxeD1x, and eD1x0. Finally, we observe that at the point, x Ux the objective function value for the transformed problem is zero. Indeed,
cDx cDD1xeD1x0. Therefore, the two problems are equivalent.
18.9
Let v 2 Rm1 be such that vB0. We will show that
vA 0 e
and hence v0 by virtue of the assumption that
rankA m1.
n nn1 nn1
.
ex1 and hence
which means that x2 . The same argument can be used for the converse. Next, we claim Ax0 ,
ex eD1xeD1x1, AxADD1xADx eD1x.
e
vu vm1
where u 2 Rm constitute the first m components of v. Then,
vBuADvm1e0.
Postmultiplying the above by e, and using the facts that Dex0, Ax00, and een, we get uAx0vm1nvm1n0,
This in turn gives us the desired result. To proceed, write v as
162
which implies that vm10. Hence, uAD0, which after postmultiplying by D1 gives uA0.
Hence,
which implies that v0. Hence, rank Bm1.
18.10
We proceed by induction. For k0, the result is true because x0a0. Now suppose that xk is a strictly interior point of . We first show that x k1 is a strictly interior point. Now,
vA 0, e
k1 k xa0rc.
k1, we have
k Then, since2 0,1 and kc
kx
Since r is the radius of the largest sphere inscribed in , x k1 is a strictly interior point of . To complete
k1 k a0krkc kr.
the proof, we write
We already know that xk1 2 . It therefore remains to show that it is strictly interior, i.e., xk10.
xk1U1x k1Dkx k1 k e Dk x k1
To see this, note that eDkx k10. Furthermore, we can write
2 x k1 xk 3 k1 6 1 . 1 7
64 . . 75 . k1 k
D k x
0 by the induction hypothesis, and x k1x 1 ,x n
x n xn above, xk10 and hence it is a strictly interior point of .
19. Integer Linear Programming
19.1
k k Since xkx1 ,,xn
k1 k1
0 by the
The result follows from the simple observation that if M is a submatrix of A, then any submatrix of M is also a submatrix of A. Therefore, any property involving all submatrices of A also applies to all submatrices of M.
19.2
The result follows from the simple observation that any submatrix of A is the transpose of a submatrix of A, and that the determinant of the transpose of a matrix equals the determinant of the original matrix.
19.3
The claim that A is totally unimodular if A,I is totally unimodular follows from Exercise 19.1. To show the converse, suppose that A is totally unimodular. We will show that any pp invertible submatrix of A, I , pminm, n, has determinant 1. We first note that any pp invertible submatrix of A, Ithat consists only of columns of A has determinant 1 because A is totally unimodular. Moreover, any pp invertible submatrix of I has determinant 1.
Consider now a pp invertible submatrix of A, I composed of k columns of A and pk columns of I. Without loss of generality, suppose that this submatrix is composed of the first p rows of A,I, the last k columns of A, and the first pk columns of I. This choice of rows and columns is without loss of generality because we can exchange rows and columns to arrive at this form, and each exchange only changes the sign of the determinant. We now proceed as in the proof of Proposition 19.1.
19.4
The result follows from these properties of determinants: 1 that exchanging columns only changes the sign 163
of the determinant; 2 the determinant of a block triangular matrix is the product of the determinants of the diagonal blocks; and 3 the determinant of the identity matrix is 1. See also Exercise 2.4.
19.5
19.6
The vectors x and z together satisfy
which means that zbAx. Because the righthand side involves only integers, z is an integer vector.
Axzb, The following MATLAB code generates the figures.
The vertices of the feasible set
x25 1; 25 253; 1;
X0 0 x1 2.5;
Y0 3 x2 0;
fs16; Fontsize
Draw the fesible set for x1 x2 in R.
viconvhullX,Y;
plotX,Y, o;
axis on; axis equal;
axis0.2 4.2 0.2 3.2;
hold on
fill Xvi, Yvi, b,facealpha, 0.2;
text.1,.5,fontsize48Omega,position, 1.5 1.25
setgca,Fontsize,fs
hold off
The optimal solution has to be one of the extreme points
c3 4;
Draw the feasible set for the noninteger problem
figure
axis on; axis equal;
x0.5:0.1:x1;
y1x0.43;
y2x2.5;
fs16;Fontsize
plotx,y1,b,x,y2,b,LineWidth,2;
axis0.2 4.2 0.2 3.2;
setgca,Fontsize,fs
hold on
Xzeros1,4 ones1,3 2ones1,3 3;
Y0:maxfloorY 0:maxfloorY1 0:maxfloorY1 1;
plotX,Y,bls,LineWidth,2,
MarkerEdgeColor,k,
MarkerFaceColor,g,
MarkerSize,10
Plot of the cost function
xc1:0.5:5;
yc0.75xc144ones1,lengthxc;
yc00.75xc17.54ones1,lengthxc;
fs16;Fontsize
plotxc, yc, r, xc, yc0, ok, LineWidth,2;
setgca,Fontsize,fs
text.1,.5,fontsize48Omega,position, 1.5 1.25
,XminmincX; Y;
strsprintfThe maximizer is d, d and the maximum is .4f,
XXmin, YXmin, cXXmin; YXmin;
dispstr;
164
19.7
It suces to show the following claim: If we introduce the equation
Xn
j m1
into the original constraint, then the result holds. The reason this suces is that the Gomory cut is obtained by subtracting this equation from an equation obtained by elementary row operations on A,b hence is equivalent to premultiplication by an invertible matrix.
To show the above claim, let xn1 satisfy this constraint with an integer vector x. Then,
xi
byijcxj xn1 byi0c
Xn
j m1
byijcxj xn1 byi0c,
xi
xn1byi0cxi
which implies that
Because the righthand side involves only integers, xn1 is an integer.
19.8
follows by induction on the number of Gomory cuts, using Exercise 19.7 at each inductive step.
19.9
The result follows from Exercises 19.5 and 19.8.
19.10
The dual problem is:
byijcxj.
If there is only one Gomory cut, then the result follows directly from Exercise 19.7. The general result
Xn
j m1
312 subjectto 21223
11224 5
1,2 0 1,2 2Z.
minimize
55
The problem is solved graphically using the same approach as in Example 19.5. We proceed by calculating the extreme points of the feasible set. We first assume that 1,2 2 R. The extreme points are calculated intersecting the given constraints, and they are:
15, 5, 215,0. 22
In Figure 24.10, we show the feasible setfor the case when 1 , 2 2 R.
Next we sketch the feasible set for the case when 1,2 2 Z and solve the problem graphically. The
graphical solution is depicted in Figure 24.11. We can see in Figure 24.11 that the optimal integer solution
is
The following MATLAB code generates the figures.
x 6,2.
The vertices of the feasible set are:
x25 25;1 253; 4;
X7.5 x1 20 20;
165
Figure 24.10 Feasible setfor the case when 1 , 2 2 R in Example 19.5.
18 16 14 12 10
8 6 4 2 0
0 5 10 15
Figure 24.11 Real feasible set with 1,2 2 Z. 166
Y0 x2 0 40;
fs16;Fontsize
Now we draw the set Omega, supposing x1 x2 in R.
viconvhullX,Y;
plotX1:2,Y1:2, rX, LineWidth,4;
axis on; axis equal;
axis.2 18 0.2 18;
setgca,Fontsize,fs
titleFeasible set supossing x1, x2 in R.,Fontsize,14,Fontname,Avantgarde;
hold on
fill Xvi, Yvi, b,facealpha, 0.2;
text.1,.5,fontsize48Omega,position, 12 5
hold off
We now the optimal solution has to be one of the extreme points.
c3 1;
Now we draw the real feasible set for the problem.
figure
axis on; axis equal;
axis.2 18 0.2 18;
setgca,Fontsize,fs
titleFeasible set and cost function,Fontsize,14,Fontname,Avantgarde;
hold on
X; Y;
for i1:18
j0;
while ji42.5j18
ifj7.5i
XX i;
YY j; end
jj1; end
end
plotX,Y,bls,LineWidth,1,
MarkerEdgeColor,k,
MarkerFaceColor,g,
MarkerSize,10
x5:0.1:18;
y1x42.5;
y27.5x;
plotx,y1,b,x,y2,b,LineWidth,2;
setgca,Fontsize,fs
text.1,.5,fontsize48Omega,position, 12 5
Plot of the cost function at level 17.5
xc1:0.5:18;
yc352ones1,lengthxc3xc;
plotxc, yc, dk, LineWidth,2;
Plot of the cost function
xc1:0.5:18;
yc20ones1,lengthxc3xc;
167
plotxc, yc, r, LineWidth,2;
,XminmincX; Y;
strsprintfThe minimizer is d, d and the maximum is .4f,
XXmin, YXmin, cXXmin; YXmin;
dispstr;
168
20. Problems with Equality Constraints
20.1
The feasible set consists of the points We next find the gradients:
All feasible points are not regular because at the above points the gradients of h and g are not linearly independent. There are no regular points of the constraints.
20.2
a. As usual, let f be the objective function, and h the constraint function. We form the Lagrangian lx, fxhx, and then find critical points by solving the following equations Lagrange condition:
We obtain
Dlx, 0. 437 26137 26437
07 6 x2 7 6 57 57 637667 . 05415 435
0 2 6
x165, 110, 3425, 275, 65.
2622037 Lx , FxHx 42 6 05.
000
Txy2R3 :1 2 0y0 405
xa2, a1. rhx2x1 2 and rgx
0 . 3212
The unique solution to the above system is
Note that x is a regular point. We now apply the SOSC. We compute
The tangent plane is
a54, 58, 1 : a 2 R. Let ya54,58,1 2 Tx, a 6 0. We have
yLx,y75a20. 32
Therefore, x is a strict local minimizer.
b. The Lagrange condition for this problem is
4210 2x22x20
x21x290. 169
26 2 62
2 0 6 0 0 0 2 0 0 5
1 2 0 0 0
60 41 4
0
Dxlx, 0,
We have four points satisfying the Lagrange condition:
x13, 0, x23, 0, x32, p5,
x42, p5,
Note that all four points x1, . . . , x4 are regular. We now apply the SOSC. We have
Let ya0,1 2 Tx1, a 6 0. Then
yLx1,1y2a20.
Hence, x1 is a strict local minimizer. For the second point, we have
Hence, x2 is a strict local minimizer. For the third point, we have
Lx3, 3
T x3 Let yap5,2 2 Tx3, a 6 0. Then
123 223 31 41.
For the first point, we have
Lx,0 02 0, 02 02
Txy : 21,2x2y0. Lx1, 143 0
0 23 Tx1a0,1 : a 2 R.
3
0 0.2 0
00
ap5, 2 : a 2 R.
Lx2, 243
0 103
yLx3, 3y10a20. Hence, x3 is a strict local maximizer.
For the fourth point, we have
Tx4 Let yap5,2 2 Tx4, a 6 0. Then
Lx4, 4
2 0 00
ap5,2 : a 2 R.
yLx4, 4y10a20. Hence, x4 is a strict local maximizer.
170
c. The Lagrange condition for this problem is
We have four points satisfying the Lagrange condition:
x11p2, 12p2, x21p2, 12p2, x31p2, 12p2, x41p2, 12p2,
114 214 314 414.
x22x10
x18x20 x214x210.
Note that all four points x1, . . . , x4 are regular. We now apply the SOSC. We have Lx,0 12 0,
Note that
10 08 Txy : 21,8x2y0.
Lx,1412 11 2
Lx, 1412 1 . 12
After standard manipulations, we conclude that the first two points are strict local maximizers, while the last two points are strict local minimizers.
20.3
We form the lagrangian
The Lagrange conditions take the form,
lx, axbx11x222x3.
010 01
26 x21 37
41 x3 1 25
203 x22
64075
abbaxhrxh1x rxh2xi26 0 1 0 37 26 1 0 37
rxl
41 0 1541 15
hx
x1x20. x2x3 0
It is easy to see that x0 and 0 satisfy the Lagrange, FONC, conditions. 171
The Hessian of the lagrangian is
2601037 Lx,ab ba 41 0 15
010
8 26 1 37 9 Tx: y : ya 41 5 , a 2 R ; .
1 To verify if the critical point satisfies the SOSC, we evaluate
yLx, y4a20. Thus the critical point is a strict local maximizer.
20.4
By the Lagrange condition, xx1, x2 satisfies
x10
x1440.
20.5
2x x02x0 kxk29,
and the tangent space
Eliminatingwe get
which implies that x143. Therefore, rfx43,163.
3140 a. The Lagrange condition for this problem is:
where2 R. Rewriting the first equation we get 1xx0, which when combined with the second equation gives two values for 1: 123 and 123. Hence there are two solutions to the
p1 2p Lagrange condition: x1321, 3, and x2321, 3.
b. We have Lxi , i 1i I . To apply the SONC Theorem, we need to check regularity. This is easy, since the gradient of the constraint function at any point x is 2x, which is nonzero at both the points in part a.
For the second point, 1223, which implies that the point is not a local minimizer because the SONC does not hold.
On the other hand, the first point satisfies the SOSC since 1123, which implies that it is a strict local minimizer.
20.6
a. Let x1, x2, and x3 be the dimensions of the closed box. The problem is minimize 2x1x2x2x3x3x1
subject to x1x2x3V.
We denote fx2x1x2 x2x3 x3x1, and hxx1x2x3 V. We have rfx22 x3,x1x3, x1x2 and rhxx2x3, x1x3, x1x2. By the Lagrange condition, the dimensions of the box with minimum surface area satisfies
2bcbc0 2acac0 2abab0
abcV, 172
where2 R.
b. Regularity of x means rhx 6 0 since there is only one scalar equality constraint in this case. Since xa, b, c is a feasible point, we must have a, b, c 6 0 for otherwise the volume will be 0. Hence, rhx 6 0, which implies that x is regular.
c. Multiplying the first equation by a and the second equation by b, and then subtracting the first from the
second, we obtain:
Since c 6 0 see part b, we conclude that ab. By a similar procedure on the second and third equations,
cab0.
we conclude that bc. Hence, substituting into the fourth constraint equation, we obtain
abcV 13, d. The Hessian of the Lagrangian is given by
with 4V 13.
Lx, 42c 0
2 b 37 26 022 37 26 0 1 1 37 2a542 0 252 41 0 15 . 0 2 2 0 1 1 0
26 0 2 c 2b 2a
The matrix Lx, is not positive definite there are several ways to check this: we could use Sylvesters criterion, or we could compute the eigenvalues of Lx , , which are 2, 2, 4. Therefore, we need to compute the tangent space Tx. Note that
Dhxrhxbc, ac, abV 231, 1, 1. Txy:Dhxy0y:1,1,1y0y:y3 y1 y2.
Hence,
Lety2Tx,y60. Notethateithery1 60ory2 60. Wehave,
260 1 137
yLx, y2y 41 0 15 y4y1y2y1y3y2y3.
110
yLx, y4y1y2y1y1y2y2y1y24y12y1y2y24zQz
Substituting y3y1y2, we obtain wherezy1,y2 60and
Q 1 12 0. 12 1
Therefore, yLx, y0, which shows that the SOSC is satisfied. An alternative simpler calculation:
260 1 137
yLx,y2y41 0 15y2y1y2 y3y2y1 y3y3y1 y2.
110
Substituting y1y2y3, y2y1y3, and y3y1y2 in the first, second, and third terms,
respectively, we obtain
yLx, y2y12y2y20. 173
20.7
a. We first compute critical points by applying the Lagrange conditions. These are:
2x12x10 6x12x20 1230
x21 x2 x23 16 There are six points satisfying the Lagrange condition:
x1p632, 0, 12 , x2p632, 0, 12 ,
x30, 0, 4, x40, 0, 4, x50, p5756, 16 , x60, p5756, 16 ,
0.
1 1
2 1
318
418
5 3
63.
All the above points are regular. We now apply second order conditions to establish their nature. For this,
we compute
26 2 0 0 37 26 2 0 0 37 Fx40 6 05, Hx40 2 05,
000 002 Txy 2 R3 : 21,22,2x3y0.
26 0 0 0 37 Lx1,140 4 0 5
0 0 2
T x1ap63, b, a : a, b 2 R.
Let yap63, b, a 2 T x1, where a and b are not 8both zero. Then
and
For the first point, we have
: 0 if abp2 yLx1,1y4b22a2 0 ifabp2.
From the above, we see that x1 does not satisfy the SONC. Therefore, x1 cannot be an extremizer. Performing similar calculations for x2, we conclude that x2 cannot be an extremizer either.
For the third point, we have
26740 037 Lx3,34 0 234 0 5
0 0 14 Tx3a,b,0 : a,b 2 R.
Let ya,b,0 2 Tx3, where a and b are not both zero. Then yLx3,3y7a223b20.
0 ifabp2
44 174
Hence, x3 is a strict local minimizer. Performing similar calculations for the remaining points, we conclude that x4 is a strict local minimizer, and x5 and x6 are both strict local maximizers.
b. The Lagrange condition for the problem is:
2161420 22411220
324x1x262140 0. We represent the first two equations as
26 4×10 . 4 212 x2 0
From the constraint equation, we note that x0, 0 cannot satisfy the Lagrange condition. Therefore, the determinant of the above matrix must be zero. Solving foryields two possible values: 17 and 12. We then have four points satisfying the Lagrange condition:
x12, 4, x22, 4, x32p14, p14, x42p14, p14,
117 217 312 412.
Applying the SOSC, we conclude that x1 and x2 are strict local minimizers, and x3 and x4 are strict local maximizers.
20.8
a. We can represent the problem as
and
Let y3,2 2 Tx1,6 0. We have
minimize f x subject to hx0,
where fx21324, and hxx1x26. We have Dfx2,3, and Dhxx2,x1. Note that 0 is not a feasible point. Therefore, any feasible point is regular. If x is a local extremizer, then by the Lagrange multiplier theorem, there exists2 R such that DfxDhx0, or
220 310.
Solving, we get two possible extremizers: x13, 2, with corresponding Lagrange multiplier 11, and x23, 2, with corresponding Lagrange multiplier 21.
b. We have F xO, and
First, consider the point x13, 2, with corresponding Lagrange multiplier 11. We have
Lx1, 1 0 1 , 10
Hx0 1. 10
Tx1y:2,3y03,2 :2R.
yLx1, 1y1220. 175
Therefore, by the SOSC, x13, 2 is a strict local minimizer.
Next, consider the point x23, 2, with corresponding Lagrange multiplier 21. We have
Lx2, 20 1 . 10
and
Tx2y : 2,3y03,2 :2 RTx1. Let y3,2 2 Tx2,6 0. We have
yLx2, 2y1220. Therefore, by the SOSC, x23, 2 is a strict local maximizer.
c. Note that fx18, while fx216. Therefore, x1, although a strict local minimizer, is not a global minimizer. Likewise, x2, although a strict local maximizer, is not a global maximizer.
20.9
We observe that fx1,x2 is a ratio of two quadratic functions, that is, we can represent fx1,x2 as fx1,x2 xQx.
xPx
Therefore, if a point x is a maximizer of fx1,x2 then so is any nonzero multiple of this point because
txQtxt2xQxxQx. txP tx t2xP x xP x
Thus any nonzero multiple of a solution is also a solution. To proceed, represent the original problem in an equivalent form,
maximize xQx18218x1x2122 subjectto xPx2x212x21.
Thus,wewishtomaximizefx1,x218x218x1x212x2 subjecttotheequalityconstraint,hx1,x2 1221220. We apply the Lagranges method to solve the problem. We form the Lagrangian function,
lx, fh, compute its gradient and find critical points. We have,
rxlrx x 18 4x 1x 2 0x!! 4 12 0 2
218 422 0x0.4 12 0 2
We represent the above in an equivalent form,
0I22 0118 41Ax0.
02 412
That is, solving the problem is being reduced to solving an eigenvalueeigenvector problem,
I29 2!x9 2 x0. 26 26
176
The characteristic polynomial is
det9 2 2 1550510.
Thus,
24 21 p0.1 2
1
2 6
The eigenvalues are 5 and 10. Because we are interested in finding a maximizer, we conclude that the value of the maximized function is 10, while the corresponding maximizer corresponds to an appropriately scaled, to satisfy the constraint, eigenvector of this eigenvalue. An eigenvector can easily be found by taking any nonzero column of the adjoint matrix of
10I29 2. 2 6
Performing simple manipulations gives
adj1 24 2.
is a maximizer for the equivalent problem. Any multiple of the above vector is a solution of the original maximization problem.
20.10
We use the technique of Example 20.8. First, we write the objective function in the form xQx, where QQ3 2.
23
The characteristic polynomial of Q is 265, and the eigenvalues of Q are 1 and 5. The solutions to
the problem are the unit length eigenvectors of Q corresponding to the eigenvalue 5, which are 1, 1p2. 20.11
Consider the problem
minimize kAxk2 subject to kxk21.
The optimal objective function value of this problem is the smallest value that kyk2 can take. The above can be solved easily using Lagrange multipliers. The Lagrange conditions are
xAAx0 1xx0.
The first equation can be rewritten as AAxx, which implies thatis an eigenvalue of AA. Moreover, premultiplying by x yields xAAxxx, which indicates that the Lagrange multiplier is equal to the optimal objective function value. Hence, the range of values that kykkAxk can take is 1 to p20.
20.12
Consider the following optimization problem we need to use squared norms to make the functions dieren tiable:
minimize subject to
kAxk2 kxk21.
177
As usual, write fxkAxk2 and hxkxk21. We have rfx2AAx and rhx2x. Note that all feasible solutions are regular. Let x be an optimal solution. Note that the optimal value of the objective function is fxkAk2. The Lagrange condition for the above problem is:
2AAx2x0 kxk21.
From the first equation, we see that
which implies thatis an eigenvalue of AA, and x is the corresponding eigenvector. Premultiplying the
above equation by x and combining the result with the constraint equation, we obtain xAAxkAxk2fxkAk2.
Therefore, because x minimizes fx, we deduce thatmust be the largest eigenvalue of AA; i.e.,
1. Therefore,
20.13
Lethx1xPx0. Letx0 besuchthathx00. Then,x0 60. Forx0 tobearegularpoint,we need to show that rhx0 is a linearly independent set, i.e., rhx0 6 0. Now, rhx2Px. Since P is nonsingular, and x0 6 0, then rhx0 2P x0 6 0.
20.14
Note that the point 1, 1 is a regular point. Applying the Lagrange multiplier theorem gives a20
AAxx,
kAk21.
p
b20.
a. Denote the solution by x1,x2. The Lagrange condition for this problem has the form
Hence, ab. 20.15
x 222 x 1 x 12 x 2x 12 x 22
000 .
From the first and third equations it follows that x1 , x2 6 0. Then, combining the first and second equations, we obtain 2x x
21 21 22
which implies that 22x22x12. Hence, x21, and by the third Lagrange equation, x121. Thus, the only two points satisfying the Lagrange condition are 1,1 and 1,1. Note that both points are regular.
b. Consider the point x1, 1. The corresponding Lagrange multiplier is 12. The Hessian of theLagrangianis 0 1 12 0 1 1
Lx, 1 0 2 0 21 1 . Txy:2,2y0a,a :a2R.
Let y 2 Tx, y 6 0. Then, ya,a for some a 6 0. We have yLx,y2a20. Hence, SONC does not hold in this case, and therefore x1, 1 cannot be local minimizer. In fact, the point is a strict local maximizer.
The tangent plane is given by
178
c. Consider the point x1, 1. The corresponding Lagrange multiplier is 12. The Hessian of the
Lagrangian is
Lx,0 112 01 1. 1 0 2 0 2 1 1
The tangent plane is given by
Lety2Tx,y60. Then,ya,aforsomea60. WehaveyLx,y2a2 0. Hence,bythe
Txy:2,2y0a,a :a2R. SOSC, the point x1, 1 is a strict local minimizer.
20.16
a. The point x is the solution to the optimization problem minimize 1kxx0k2
2 subject to Ax0.
Since rank Am, any feasible point is regular. By the Lagrange multiplier theorem, there exists2 Rm
such that
Postmultiplying both sides by x and using the fact that Ax0, we get
b. From part a, we have
Premultiplying both sides by A we get
xx0x0.
xx0A. Ax0AA
xx0A0.
from which we conclude that AA1Ax0. Hence,
xx0Ax0AAA1Ax0InAAA1Ax0.
20.17
a. The Lagrange condition is omitting all superscript for convenience: AxbAC0
Cxd.
For simplicity, write QAA, which is positive definite. From the first equation, we have
xQ1AbQ1C. Multiplying boths sides by C and using the second equation, we have
from which we obtain
Substituting back into the equation for x, we obtain
dCQ1AbCQ1C,
CQ1C1CQ1Abd.
xQ1AbQ1CCQ1C1CQ1Abd. 179
b. Rewrite the objective function as
1xAAxbAx1kbk2.
22
As before, write QAA. Completing the squares and setting yxQ1Ab, the objective function
can be written as
Hence, the problem can be converted to the equivalent QP:
minimize 1yQy 2
1yQyconst. 2
The solution to this QP is
Hence, the solution to the original problem is:
subject to CydCQ1Ab.
yQ1CCQ1C1dCQ1Ab.
xQ1AbQ1CCQ1C1dCQ1Ab
AA1AbAA1CCAA1C1dCAA1Ab,
which agrees with the solution obtained in part a.
20.18
Write f x1 x Qxc xd actually, we could have ignored d and hxbAx. We have 2
The Lagrange condition is
From the first equation we get
DfxxQc, DhxA. xQc A0
bAx0. xQ1Ac.
Multiplying both sides by A and using the second equation constraint, we get AQ1AAQ1cb.
Since Q0 and A is of full rank, we can write
AQ1A1bAQ1c.
L is positive semidefinite on M
, forally2M, yLy0
, for all x 2 Rm, BxLBx0 , forallx2Rm, xBLBx0 , forallx2Rm, xLMx0
, LM0.
Hence,
Alternatively, we could have rewritten the given problem in our usual quadratic programming form with
xQ1cQ1AAQ1A1bAQ1c. Clearly,wehaveMRB,i.e.,y2Mifandonlyifthereexistsx2Rm suchthatyBx. Hence
variable yxQ1c. 20.19
180
For positive definiteness, the same argument applies, withreplaced by . 20.20
a. By simple manipulations, we can write Therefore, the problem is
x2a2x0abu0bu1. minimize 1u2u2
201
subject to a2x0abu0bu10.
Alternatively, we may use a vector notation: writing uu0, u1, we have minimize f u
subject to hu0,
where f u1 kuk2 , and hua2 x0ab, bu. Since the vector rhuab, b is nonzero for any u,
2
then any feasible point is regular. Therefore, by the Lagrange multiplier theorem, there exists2 R such that
u0ab0 u1b0
a2x0abu0bu10.
We have three linear equations in three unknowns, that upon solving yields
u a3x0 , u a2x0 . 0 b1a2 1 b1a2
b. The Hessians of f and h are F uI2 22 identity matrix and HuO, respectively. Hence, the Hessian of the Lagrangian is Lu,I2, which is positive definite. Therefore, u satisfies the SOSC, and is therefore a strict local minimizer.
20.21
Letting zx2, u1, u2, the objective function is zQz, where
261 0 037 Q4012 05.
0 0 13 The linear constraint on z is obtained by writing
x2 21 u2 22u1u2, which can be written as Azb, where
A1,2,1, b4. Hence, using the method of Section 20.6, the solution is
1 1 1 26 1 371 26 13 37 z Q A AQ Ab44512 44435.
3 1
Thus, the optimal controls are u143 and u21. 181
20.22
The composite input vector is
The performance index J is J1 uu. To obtain the constraint Aub, where A 2 R13, we proceed as
follows. First, we write
Using the above, we obtain
uhu0 u1 u2i . 2
problem
To solve the above problem, we form the Lagrangian 2
1 u u 2
x2x12u1
x0 2u0 2u1.
x39
x22u2
x0 2u0 2u1 2u2. We represent the above in the format Aub as follows
h i 26 u 0 37
2 2 24u156.
u2
Thus we formulated the problem of finding the optimal control sequence as a constrained optimization
minimize subject to
Aub. lu, 1uuAub,
whereis the Lagrange multiplier. Applying the Lagrange firstorder condition yields uA0 and Aub.
From the first of the above conditions, we calculate, uA. Substituting the above into the second of
the Lagrange conditions gives
Combining the last two equations, we obtain a closedform formula for the optimal input sequence
In our problem,
AA1 b. uA AA1 b.
26 u 0 37 b26 1 37 u4u15AAA415.
u2 1
21. Problems With Inequality Constraints
182
21.1
a. We form the Lagrangian function,
lx,x21 42 421 22.
The KKT conditions take the form,
Dxlx, h2x121
From the first of the above equality, we obtain
824x2i0 11 0 22 0.
421 220 0
4x 212 x 2 20 .
We first consider the case when 0. Then, we obtain the point, x10, which does not satisfy the constraints.
The next case is when 1. Then we have to have x20 and using 4×21220 gives x2 20 and x3 20.
Forthecasewhen2,wehavetohavex1 0andweget x4p0 and x5p0.
b. The Hessian of l is When 1,
We next find the subspace
22 L2 02 0.
08 04 L0 0.
04
T Ty:h4 0iy0yah0 1i :a2R.
We then check for positive definiteness of L on T ,
yLya2h0 1i0 004a2 0.
041 Hence, x2 and x3 satisfy the SOSC to be strict local minimizers.
When 2, and
L2 0, 00
Tyah1 0i:a2R. 183
We have
Thus, x4 and x5 do not satisfy the SONC to be minimizers.
yLy2a20. In summary, only x2 and x3 are strict local minimizers.
21.2
a. We first find critical points by applying the KarushKuhnTucker conditions, which are
00
00.
21 2211 52 22101112
1 51 21 5 x 2x 21 2 5 x 12 x 25
We have to check four possible combinations.
Case 1: 10, 20 Solving the first and second KarushKuhnTucker equations yields x11, 5.
However, this point is not feasible and is therefore not a candidate minimizer. Case 2: 10, 20 We have two possible solutions:
x20.98, 4.8 22.02 1
x30.02, 0 350. 1
Both x2 and x3 satisfy the constraints, and are therefore candidate minimizers. Case 3: 10, 20 Solving the corresponding equation yields:
x40.5050, 4.9505 40.198. 1
The point x4 is not feasible, and hence is not a candidate minimizer. Case 4: 10, 20 We have two solutions:
x50.732, 2.679 513.246, 3.986 x62.73, 37.32 6188.8, 204 .
The point x5 is feasible, but x6 is not.
We are left with three candidate minimizers: x2, x3, and x5. It is easy to check that they are regular.
We now check if each satisfies the second order conditions. For this, we compute
For x2, we have
Lx, 221 0 . 02
Lx2, 22.04 0 02
T x2a0.1021, 1 : a 2 R. Let ya0.1021, 1 2 T x2 with a 6 0. Then
yLx2, 2y1.979a20.
Thus, by the SOSC, x2 is a strict local minimizer. For x3, we have
Lx3, 397.958 0 02
T x3a4.898, 1 : a 2 R. 184
Let ya4.898, 1 2 T x3 with a 6 0. Then,
yLx3, 3y2347.9a20.
Thus, x3 does not satisfy the SOSC. In fact, in this case, we have Tx3T x3, and hence x3 does not satisfy the SONC either. We conclude that x3 is not a local minimizer. We can easily check that x3 is not a local maximizer either.
For x5, we have
Lx5, 524.4919 0 02
T x5 0.
The SOSC is trivially satisfied, and therefore x5 is a strict local minimizer.
b. The KarushKuhnTucker conditions are:
2113 2223 x1 x2 x1x25 11 22 31 x2 5 1,2,3
0000000.
It is easy to verify that the only combination of KarushKuhnTucker multipliers resulting in a feasible point is 120, 30. For this case, we obtain x2.5,2.5, 0,0,5. We have
Lx,2 00. 02
Hence, x is a strict local minimizer in fact, the only one for this problem. c. The KarushKuhnTucker conditions are:
21 62 4211 22 6122122 x212x21 2x12x21 121 22 1221 22 1 1,2
000000.
It is easy to verify that the only combination of KarushKuhnTucker multipliers resulting in a feasible point is 10, 20. For this case, we obtain x914, 214, 0, 1314. We have
Lx, 2 6 60
T xa1, 1 : a 2 R. Let ya1,1 2 T x with a 6 0. Then
yLx, y14a20.
Hence, x is a strict local minimizer in fact, the only one for this problem. 185
21.3
The KarushKuhnTucker conditions are:
21212221
22 21 22×21 2x1x2 x2 1 x21x2 x21x2
0 0 0 0 0 0.
We have two cases to consider.
Case 1: 0 Substituting x2x21 into the third equation and
yields two possible points:
combining the result with the first two x11.618, 0.618 13.7889
x22.618, 0.382 20.2111.
Note that the resultingvalues violate the condition 0. Hence, neither of the points are minimizers although they are candidates for maximizers.
Case 2: 0 Subtracting the second equation from the first yields x1x2, which upon substituting into the third equation gives two possible points:
x312, 12, x412, 12.
Note that x4 is not a feasible point, and is therefore not a candidate minimizer.
Therefore, the only remaining candidate is x3, with corresponding 312 and 0. We now
check second order conditions. We have
Lx3, 0, 3 1 1
11
Tx3a1,1 : a 2 R.
Let ya1, 1 2 T x3 with a 6 0. Then
yLx3, 0, 3y4a20.
Therefore, by the SOSC, x3 is a strict local minimizer. 21.4
The optimization problem is:
minimize e n p subject to GpPem
p0,
where Ggi,j, en1,,1 with n components, and pp1,,pn. The KKT condition for this problem is:
e n 1 G 201 Pem Gp2 p0
GpPem 1,2,p0.
186
21.5
a. We have fxx2 x1 23 3 and gx12. Hence, rfx31 22,1 and rgx0, 1. The KKT condition is
0 31220 10 120
120.
The only solution to the above conditions is x2, 1, 1.
To check if x is regular, we note that the constraint is active. We have rgx0,1, which is
nonzero. Hence, x is regular. b. We have
Lx,FxGx0 0. 00
Hence, the point x satisfies the SONC.
c. Since 0,wehaveT x,Txy:0,1y0y:y2 0,whichmeansthatT
contains nonzero vectors. Hence, the SOSC does not hold at x.
21.6
a. Write fxx2, gxx2 x1 12 3. We have rfx0,1 and rgx21 1,1. The KKT conditions are:
0 2110 10 x2 x1 12 30
x2 x1 12 30.
From the third equation we get 1. The second equation then gives x11, and the fourth equation gives x23. Therefore, the only point that satisfies the KKT condition is x1, 3, with a KKT multiplier of 1.
b. Note that the constraint x2 x1 12 30 is active at x. We have Txy : 0,1y0y:y2 0,andNxy:y0,1z, z2Ry:y1 0. Because 0,wehave T xTxy:y2 0.
c. We have
From part b, Txy : y20. Therefore, for any y 2 Tx, yLx,y2y120, which means
that x does not satisfy the SONC. 21.7
a. We need to consider two optimization problems. We first consider the minimization problem
minimize x122x212 subjectto x21x20
x1 x2 20×10.
Lx,O12 02 0. 00 00
187
Then, we form the Lagrangian function
lx,x1 22 x2 12 121 x22x1 x2 231.
The KKT condition takes the form
rxlx,h2x1 2211 2 3
1x 21x 2 0 21 x2 20 310
i0.
22 11 2i0T
The point x0 satisfies the above conditions for 12, 20, and 34. Thus the point x does not satisfy the KKT conditions for minimum.
We next consider the maximization problem
minimizex122x212 subjectto x21x20
x1 x2 20×10.
The Lagrangian function for the above problem is,
lx,x1 22 x2 12 121 x22x1 x2 231.
The KKT condition takes the form
rxlx,h2x1 2211 2 3
22 11 2i0
The point x0 satisfies the above conditions for 12, 20, and 34. Hence, the point x satisfies
1x 21x 2 0 21 x2 20 310
i0.
the KKT conditions for maximum.
b. We next compute the Hessian, with respect to x, of the lagrangian to obtain
LF 1G1 2 0 4 02 00 2 0 0 0 2
which is indefinite on R2. We next find the subspace
T y:rg1xy0y: 0 1y0
rg3x 1 00,
That is, Tis a trivial subspace that consists only of the zero vector. Thus the SOSC for x to be a strict local maximizer is trivially satisfied.
21.8
a. Write hxx1x2, gxx1. We have Dfxx2,2x1x2, Dhx1,1, Dgx1,0. 188
Note that all feasible points are regular. The KKT condition is:
x20 2x1x20 x10 0 x1x20
x10.
We first try x1x10 active inequality constraint. Substituting and manipulating, we have the solution x1x20 with 0, which is a legitimate solution. If we then try x1x10 inactive inequality constraint, we find that there is no consistent solution to the KKT condition. Thus, there is only one point satisfying the KKT condition: x0.
c. The tangent space at x0 is given by
T0y : 1,1y0,1,0y00.
Therefore, the SONC holds for the solution in part a.
d. We have
Lx,, 0 22. 22 21
Hence, at x0, we have L0, 0, 0O. Since the active constraint at x0 is degenerate, we have T 0,0y : 1,1y0,
which is nontrivial. Hence, for any nonzero vector y 2 T 0,0, we have yL0,0,0y0 6 0. Thus, the SOSC does not hold for the solution in part a.
21.9
a. The KKT condition for the problem is:
AxbAe 0 x0 0 ex10 x0
where e1,,1.
b. A feasible point x is regular in this problem if the vectors e, ei, i 2 Jx are linearly independent, where Jxi : xi0 and ei is the vector with 0 in all components except the ith component, which is 1.
In this problem, all feasible points are regular. To see this, note that 0 is not feasible. Therefore, any feasible point results in the set Jx having fewer than n elements, which implies that the vectors e, ei, i 2 Jx are linearly independent.
21.10
By the KKT Theorem, there exists 0 such that
xx0rgx0
gx0. Premultiplying both sides of the first equation by xx0, we obtain
kxx0k2xx0rgx0. 189
Since kx x0k20 because gx00 and 0, we deduce that x x0rgx0 and 0. From the second KKT condition above, we conclude that gx0.
21.11
a. By inspection, we guess the point 2, 2 drawing a picture may help.
b. We write fxx1 32 x2 42, g1xx1, g2xx2, g3xx1 2, g4xx2 2,
gg1, g2, g3, g4. The problem becomes
subject to gx0.
We now check the SOSC for the point x2,2. We have two active constraints: g3, g4. Regularity holds, since rg3x1,0 and rg4x0,1. We have rfx2,4. We need to find a2 R4, 0, satisfying FONC. From the condition gx0, we deduce that 120. Hence, DfxDgx0 ifandonlyif 0,0,2,4. Now,
Fx2 0, Gx0 0. 02 00
21.12
The KKT condition is
minimize f x
Hence
which is positive definite on R2. Hence, SOSC is satisfied, and x is a strict local minimizer.
Lx, 2 0 02
xQA0 Axb0 0
Axb0. Postmultiplying the first equation by x gives
xQxAx0. We note from the second equation that Axb. Hence,
xQxb0.
Since Q0, the first term is nonnegative. Also, the second term is nonnegative because 0 and b0. Hence, we conclude that both terms must be zero. Because Q0, we must have x0.
Aside: Actually, we can deduce that the only solution to the KKT condition must be 0, as follows. The problem is convex; thus, the only points satisfying the KKT condition are global minimizers. However, we see that 0 is a feasible point, and is the only point for which the objective function value is 0. Further, the objective function is bounded below by 0. Hence, 0 is the only global minimizer.
21.13
a. We have one scalar equality constraint with hxc,dxe and two scalar inequality constraints with gxx. Hence, there exists2 R2 and2 R such that
ac1
bd2 x
cx1dx2 x
190
0000e0.
b. Because x is a basic feasible solution, and the equality constraint precludes the point 0, exactly one of the inequality constraints is active. The vectors rhxc,d and rg11,0 are linearly independent. Similarly, the vectors rhxc,d and rg20,1 are linearly independent. Hence, x must be regular.
c. The tangent space is given by
Txy2Rn :Dhxy0, Dgjxy0, j2Jx
NM,
where M is a matrix with the first row equal to Dhxc,d, and the second row is either Dg11,0
or Dg20,1. But, as we have seen in part b, rankM2 Hence, Tx0.
d. Recall that we can taketo be the relative cost coecient vector i.e., the KKT conditions are satisfied withbeing the relative cost coecient vector. If the relative cost coecients of all nonbasic variables are strictly positive, then j0 for all j 2 Jx. Hence, T x,Tx0, which implies that
Lx, , 0 on T x, . Hence, the SOSC is satisfied. 21.14
Let x be a solution. Since A is of full rank, x is regular. The KKT Theorem states that x satisfies:
0 cA0
Ax0.
If we postmultiply the second equation by x and subtract the third from the result, we get
21.15
a. We can write the LP as
cx0.
minimize f x subject to hx0, gx0,
where fxcx, hxAxb, and gxx. Thus, we have Dfxc, DhxA, and DgxI. The KarushKuhnTucker conditions for the above problem have the form: if x is a local minimizer, then there existsandsuch that
0 cA0
x0.
b. Let x be an optimal feasible solution. Then, x satisfies the KarushKuhnTucker conditions listed in part a. Since 0, then from the second condition in part a, we obtain Ac. Hence, is a feasible solution to the dual see Chapter 17. Postmultiplying the second condition in part a by x, we have
0cx Axx cx b
which gives
Hence, achieves the same objective function value for the dual as x for the primal.
c. From part a, we have c A. Substituting this into x0 yields the desired result. 191
cxb .
21.16
By definition of Jx, we have gix0 for all i 62 Jx. Since by assumption gi is continuous for all i, thereexists0suchthatgix0foralli62JxandallxinthesetBx:kxxk. Let S1 x:hx0,gjx0,j2Jx. WeclaimthatSBS1B. Toseethis,notethatclearly SBS1B. ToshowthatS1BSB,supposex2S1B. Then,bydefinitionofS1 andB,we havehx0,gjx0forallj2Jx,andgix0foralli62Jx. Hence,x2SB.
Since x is a local minimizer of f over S, and SBS, x is also a local minimizer of f over SBS1B. Hence, we conclude that x is a regular local minimizer of f on S1. Note that S0S1, and x 2 S0. Therefore, x is a regular local minimizer of f on S0.
21.17
Writefxx21x2,g1xx21x24,g2xx2x12,andgg1,g2. Wehaverfx2x1,22, rg1x21,1, rg2x1,1, and D2fxdiag2,2. We compute
rfxrgx2x1 211 2,22 1 2.
We use the FONC to find critical points. Rewriting rfxrgx0, we obtain
x12 , x212 . 221 2
We also use gx0 and 0, giving
121 x2 40, 22 x1 20.
The vectorhas two components; therefore, we try four dierent cases. Case 1: 10, 20 We have
x21 x2 40, x2 x1 20.
We obtain two solutions: x12, 0 and x23, 5 . For x1 , the two FONC equations give 12 and 22211, which yield 1245. This is not a legitimate solution since we require 0. For x2, the two FONC equations give 1210 and 32212, which yield 165, 665. Again, this is not a legitimate solution.
Case 3: 10, 20 We have
x21 x2 40, x1 0, x21.
Therefore, x24, 18, and again we dont have a legitimate solution.
Case 4: 10, 20 We have x1x20, and all constraints are inactive. This is a legitimate
candidate for the minimizer. We now apply the SOSC. Note that since the candidate is an interior point of the constraint set, the SOSC for the problem is equivalent to the SOSC for unconstrained optimization. The Hessian matrix D2fxdiag2,2 is symmetric and positive definite. Hence, by the SOSC, the point x0, 0 is the strict local minimizer in fact, it is easy to see that it is a global minimizer.
21.18
Write fxx21x2, g1xx1x24, g2xx110, and gg1,g2. We have rfx21,22, rg1x1, 22, rg2x1, 0, D2fxdiag2, 2, D2g1xdiag0, 2, and D2g2xO. We
Case 2: 10, 20 We have
x2 x1 20, x12, x2 2.
22
Hence, x1x2 , and thus x1, 1, 22. This is not a legitimate solution since we require 0.
compute
We use the FONC to find critical points. Rewriting rfxrgx0, we obtain
rfxrgx2x1 1 2,22 212.
x11 2, 2
x2110.
2
192
Since we require 0, we deduce that x20. Using gx0 gives 1140, 20.
We are left with two cases.
Case 1: 10, 20 We have x140, and 18, which is a legitimate candidate.
Case2: 1 0,2 0Wehavex1 x2 0,whichisnotalegitimatecandidate,sinceitisnota
feasible point.
We now apply SOSC to our candidate x4, 0, 8, 0. Now,
L4,0,8,02 080 02 0, 02 02 018
which is positive definite on all of R2. The point 4, 0 is clearly regular. Hence, by the SOSC, x4, 0 is a strict local minimizer.
21.19
Writefxx21x2,g1xx1x24,g2x3x2x1,g3x3x2x1 andgg1,g2,g3. Wehave rfx21, 22, rg1x1, 22, rg2x1, 3, rg2x1, 3, D2fxdiag2, 2, D2g1xdiag0, 2, and D2g2xD2g3xO.
From the figure, we see that the two candidates are x13, 1 and x23, 1. Both points are easily verified to be regular.
For x1, we have 30. Now,
Dfx1Dgx161 2,221 320,
which yields 14, 22. Now, T x10. Therefore, any matrix is positive definite on T x1. Hence, by the SOSC, x1 is a strict local minimizer.
For x2, we have 20. Now,
Dfx1Dgx161 3,221 330,
which yields 14, 32. Now, again we have T x20. Therefore, any matrix is positive definite on T x2. Hence, by the SOSC, x2 is a strict local minimizer.
21.20
a. Write fx31 and gx2x1x2. We have rfx3,0 and rgx1,0. Hence, letting 3, we have rfxrgx0. Note also that 0 and gx0. Hence, x2,0 satisfies the KKT first order necessary condition.
b. We have F xO and Gxdiag0, 2. Hence, Lx, O3 diag0, 2diag0, 6. Also, Txy : 1,0y0y : y10. Hence, x2,0 does not satisfy the second order necessary condition.
c. No. Consider points of the form xx2 2,2, x2 2 R. Such points are feasible, and could be arbitrarily close to x. However, for such points x 6 x,
fx3x2 2662 6fx. Hence, x is not a local minimizer.
21.21
The KKT condition for the problem is
0 xa0
x0. 193
Premultiplying the second KKT condition above byand using the third condition, we get akk2.
Also, premultiplying the second KKT condition above by x and using the feasibility condition axb,
we get
kx k2b 0.
We conclude that 0. For if not, the equation akk2 implies that a0, which contradicts0anda0.
Rewriting the second KKT condition with 0 yields xa.
Using the feasibility condition axb, we get
xa b .
kak2
21.22
a. Suppose x1 2×2 21. Then, the point xx1 , x2lies in the interior of the constraint set x : kxk21. Hence, by the FONC for unconstrained optimization, we have that rfx0, where fxkxa,bk2 is the objective function. Now, rfx2xa,b0, which implies that xa, b which violates the assumption x1 2×2 21.
b. First, we need to show that x is a regular point. For this, note that if we write the constraint as gxkxk210, then rgx2x 6 0. Therefore, x is a regular point. Hence, by the Karush KuhnTucker theorem, there exists2 R, 0, such that
which gives
Hence,x isunique,andwecanwritex1 a,x2 b,where110.
rfxrgx0, x 1 a.
1 b
c. Using part b and the fact that kxk1, we get kxk22ka, bk21, which gives 1ka, bk
1pa2b2.
21.23
a. The KarushKuhnTucker conditions for this problem are
2x1expx1
221
expx1 x2
expx1
0
0
0
x20.
b. From the second equation in part a, we obtain 221. Since x2expx10, then 0. Hence, by the third equation in part a, we obtain x2expx1.
c. Since 2212expx11, then by the first equation in part a, we have 212expx1 1 expx1 0
which implies
x1exp2x1expx1. 194
Since expx1,exp2x10, then x10, and hence expx1,exp2x11. Therefore, x12. 21.24
a. We rewrite the problem as
minimize f x subject to gx0,
otherwise it would not be feasible, and therefore it is a regular point. By the KKT theorem, there exists0suchthatcx andgx0. Sincec60,wemusthave 60. Therefore,gx0, which implies that kxk22. p
where fxcx and gx1kxk2 1. Hence, rfxc and rgxx. Note that x 6 0 for 2
b. Fromparta,wehave2kek2 2. Sincekek2 n,wehave 2n.
To find c, we use
4cx p8kxk2 82 8, and thus 2. Hence, c2e2 2ne.
21.25
We can represent the equivalent problem as
minimize f x
where gx1 khxk2. Note that 2
Therefore, the KKT condition is:
subject to gx0, rgxDhxhx.
0 rfxDhxhx0
khx k0.
Note that for a feasible point x, we have hx0. Therefore, the KKT condition becomes
0 rfx0.
Note that rgx0. Therefore, any feasible point x is not regular. Hence, the KKT theorem cannot be applied in this case. This should be clear, since obviously rfx0 is not necessary for optimality in general.
22. Convex Optimization Problems
22.1
The given function is a quadratic, which we represent in the form
f(x) = xᵀFx, F = −[1 γ −1; γ 1 2; −1 2 5].
A quadratic function is concave if and only if it is negative semidefinite; equivalently, if and only if its negative is positive semidefinite. On the other hand, a symmetric matrix is positive semidefinite if and only if all its principal minors (not just the leading principal minors) are nonnegative. Thus we will determine the range of the parameter γ for which
−F = [1 γ −1; γ 1 2; −1 2 5]
is positive semidefinite. It is easy to see that the three first-order principal minors (diagonal elements of −F) are all positive. There are three second-order principal minors. Only one of them, the leading principal minor, is a function of the parameter γ:
det [1 γ; γ 1] = 1 − γ².
The above second-order leading principal minor is nonnegative if and only if γ ∈ [−1, 1].
The other second-order principal minors are
det [1 −1; −1 5] = 4 and det [1 2; 2 5] = 1,
and they are positive. There is only one third-order principal minor, det(−F), where
det(−F) = 1·det[1 2; 2 5] − γ·det[γ 2; −1 5] − det[γ 1; −1 2]
= (5 − 4) − γ(5γ + 2) − (2γ + 1)
= −5γ² − 4γ.
The third-order principal minor is nonnegative if and only if γ(5γ + 4) ≤ 0, that is, if and only if γ ∈ [−4/5, 0].
Combining this with γ ∈ [−1, 1] from above, we conclude that −f is positive semidefinite — equivalently, the quadratic function f is concave — if and only if γ ∈ [−4/5, 0].
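The interval can be checked numerically with a short sweep; the matrix below is −F as reconstructed above (an illustrative check, not part of the original solution):
% Sweep gamma and test when -F(gamma) is positive semidefinite,
% i.e., when f is concave.
for gamma = -1:0.1:0.2
  negF = [1 gamma -1; gamma 1 2; -1 2 5];
  fprintf('gamma = %5.2f  min eig = %8.4f\n', gamma, min(eig(negF)))
end
% The minimum eigenvalue is nonnegative precisely for gamma in [-4/5, 0].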
22.2
We have
φ(α) = ½(x + αd)ᵀQ(x + αd) − (x + αd)ᵀb
= ½α²dᵀQd + αdᵀ(Qx − b) + ½xᵀQx − xᵀb.
This is a quadratic function of α. Since Q > 0, we have
d²φ/dα² = dᵀQd > 0,
and hence by Theorem 22.5, φ is strictly convex.
22.3
Write f(x) = xᵀQx, where
Q = ½[0 1; 1 0].
Let x, y ∈ Ω. Then, x = [a₁, ma₁]ᵀ and y = [a₂, ma₂]ᵀ for some a₁, a₂ ∈ ℝ. By Proposition 22.1, it is enough to show that (y − x)ᵀQ(y − x) ≥ 0. By substitution,
(y − x)ᵀQ(y − x) = m(a₂ − a₁)² ≥ 0,
which completes the proof.
22.4
⇐: This is true by definition.
⇒: Let d ∈ ℝⁿ be given. We want to show that dᵀQd ≥ 0. Now, fix some vector x ∈ Ω. Since Ω is open, there exists α ≠ 0 such that y = x + αd ∈ Ω. By assumption,
0 ≤ (y − x)ᵀQ(y − x) = α²dᵀQd,
which implies that dᵀQd ≥ 0.
22.5
At x = 0, for any γ ∈ [−1, 1] we have, for all y ∈ ℝ,
f(y) = |y| ≥ f(0) + γ(y − 0) = γy.
Thus, in this case any γ in the interval [−1, 1] is a subgradient of f at x = 0. At x = 1, γ = 1 is the only subgradient of f, because, for all y ∈ ℝ,
f(y) = |y| ≥ f(1) + 1·(y − 1) = y.
22.6
Let x, y ∈ Ω and λ ∈ (0, 1). For convenience, write f = max{f₁, …, f_ℓ} = maxᵢ fᵢ. We have
f(λx + (1 − λ)y) = maxᵢ fᵢ(λx + (1 − λ)y)
≤ maxᵢ [λfᵢ(x) + (1 − λ)fᵢ(y)]  (by convexity of each fᵢ)
≤ λ maxᵢ fᵢ(x) + (1 − λ) maxᵢ fᵢ(y)  (by a property of max)
= λf(x) + (1 − λ)f(y),
which implies that f is convex.
22.7
Let x, y ∈ Ω and λ ∈ (0, 1). Then, h(x) = h(y) = c. By convexity of Ω, λx + (1 − λ)y ∈ Ω, so h(λx + (1 − λ)y) = c. Therefore,
h(λx + (1 − λ)y) ≤ λh(x) + (1 − λ)h(y),
and so h is convex over Ω. We also have
h(λx + (1 − λ)y) ≥ λh(x) + (1 − λ)h(y),
which shows that −h is convex, and thus h is concave.
22.8
Yes, the problem is a convex optimization problem.
First we show that the objective function f(x) = ½‖Ax − b‖² is convex. We write
f(x) = ½xᵀAᵀAx − bᵀAx + constant,
which is a quadratic function with Hessian AᵀA. Since the Hessian AᵀA is positive semidefinite, the objective function f is convex.
Next we show that the constraint set is convex. Consider two feasible points x and y, and let λ ∈ [0, 1]. Then, x and y satisfy eᵀx = 1, x ≥ 0 and eᵀy = 1, y ≥ 0, respectively. We have
eᵀ(λx + (1 − λ)y) = λeᵀx + (1 − λ)eᵀy = λ + (1 − λ) = 1.
Moreover, each component of λx + (1 − λ)y is given by λxᵢ + (1 − λ)yᵢ, which is nonnegative because every term here is nonnegative. Hence, λx + (1 − λ)y is a feasible point, which shows that the constraint set is convex.
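This simplex-constrained least-squares problem can be solved directly with a QP solver; a minimal sketch with made-up data:
% Solve min (1/2)||Ax-b||^2 over the probability simplex (illustrative).
A = [1 2 0; 0 1 1; 1 0 3]; b = [1; 2; 3];
H = A'*A; f = -A'*b;              % (1/2)x'Hx + f'x matches (1/2)||Ax-b||^2
Aeq = ones(1,3); beq = 1;         % e'x = 1
lb = zeros(3,1);                  % x >= 0
x = quadprog(H, f, [], [], Aeq, beq, lb);
disp(x')                          % a minimizer; H >= 0 confirms convexity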
22.9
We need to show that Ω is a convex set, and that f is a convex function on Ω.
To show that Ω is a convex set, we need to show that for any y, z ∈ Ω and λ ∈ (0, 1), we have λy + (1 − λ)z ∈ Ω. Let y, z ∈ Ω and λ ∈ (0, 1). Thus, y₁ ≥ y₂ ≥ 0 and z₁ ≥ z₂ ≥ 0. Write
x = λy + (1 − λ)z = [λy₁ + (1 − λ)z₁, λy₂ + (1 − λ)z₂]ᵀ.
Since λ, 1 − λ > 0,
x₁ = λy₁ + (1 − λ)z₁ ≥ λy₂ + (1 − λ)z₂ = x₂, and x₂ ≥ 0.
Hence, x ∈ Ω and therefore Ω is convex.
To show that f is convex on Ω, we need to show that for any y, z ∈ Ω and λ ∈ (0, 1), f(λy + (1 − λ)z) ≤ λf(y) + (1 − λ)f(z). Let y, z ∈ Ω and λ ∈ (0, 1). Thus, y₁ ≥ y₂ ≥ 0 and z₁ ≥ z₂ ≥ 0, so that f(y) = y₁³ and f(z) = z₁³. Because the scalar function t ↦ t³ is convex on t ≥ 0 and λy₁ + (1 − λ)z₁ ≥ 0, we have
f(λy + (1 − λ)z) = (λy₁ + (1 − λ)z₁)³ ≤ λy₁³ + (1 − λ)z₁³ = λf(y) + (1 − λ)f(z).
Hence, f is convex.
22.10
Since the problem is a convex optimization problem, we know for sure that any point of the form λy + (1 − λ)z, λ ∈ [0, 1], is a global minimizer. However, any other point may or may not be a minimizer. Hence, the largest set of points G for which we can be sure that every point in G is a global minimizer is given by
G = {λy + (1 − λ)z : 0 ≤ λ ≤ 1}.
22.11
a. Let f be the objective function and Ω the constraint set. Consider the set S = {x ∈ Ω : f(x) ≤ 1}. This set contains all three of the given points. Moreover, by Lemma 22.1, S is convex. Now, if we take the average of the first two points (which is a convex combination of them), the resulting point ½[1, 0, 0]ᵀ + ½[0, 1, 0]ᵀ = ½[1, 1, 0]ᵀ is in S, because S is convex. Similarly, the point ⅔·½[1, 1, 0]ᵀ + ⅓[0, 0, 1]ᵀ = ⅓[1, 1, 1]ᵀ is also in S, because S is convex. Hence, the objective function value of ⅓[1, 1, 1]ᵀ must be ≤ 1.
b. If the three points are all global minimizers, then the point ⅓[1, 1, 1]ᵀ, which by part a cannot have higher objective function value than the given three points, must also be a global minimizer.
22.12
a. The Lagrange condition for the problem is given by:
Qx* − Aᵀλ* = 0,
Ax* = b.
From the first equation above, we obtain
x* = Q⁻¹Aᵀλ*.
Applying the second equation (the constraint) to x*, we have AQ⁻¹Aᵀλ* = b.
Since rank A = m, the matrix AQ⁻¹Aᵀ is invertible. Therefore, the only solution to the Lagrange condition is
x* = Q⁻¹Aᵀ(AQ⁻¹Aᵀ)⁻¹b.
b. The point in part a above is a global minimizer because the problem is a convex optimization problem: the constraint set {x : Ax = b} is convex (it is a linear variety), and the objective function is convex because its Hessian, Q, is positive definite.
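A quick numerical check of the closed-form minimizer on random data (illustrative; not part of the original solution):
% Verify x* = Q^{-1} A' (A Q^{-1} A')^{-1} b on random data.
n = 5; m = 2;
M = randn(n); Q = M*M' + eye(n);  % positive definite Q
A = randn(m,n); b = randn(m,1);   % full row rank with probability 1
xstar = Q\(A'*((A*(Q\A'))\b));    % closed-form minimizer from part a
disp(norm(A*xstar - b))           % constraint residual: ~0
lambda = (A*(Q\A'))\b;            % Lagrange multiplier
disp(norm(Q*xstar - A'*lambda))   % Lagrange condition residual: ~0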
22.13
By Theorem 22.4, for all x ∈ Ω, we have
f(x) ≥ f(x*) + Df(x*)(x − x*).
Substituting Df(x*) from the equation Df(x*) + Σ_{j∈J(x*)} μⱼaⱼᵀ = 0ᵀ into the above inequality yields
f(x) ≥ f(x*) − Σ_{j∈J(x*)} μⱼaⱼᵀ(x − x*).
Observe that for each j ∈ J(x*), and for each x ∈ Ω,
aⱼᵀx* − bⱼ = 0,
aⱼᵀx − bⱼ ≤ 0.
Hence, for each j ∈ J(x*),
aⱼᵀ(x − x*) ≤ 0.
Since μⱼ ≥ 0, we get
f(x) ≥ f(x*) − Σ_{j∈J(x*)} μⱼaⱼᵀ(x − x*) ≥ f(x*),
and the proof is completed.
22.14
a. Let Ω = {x ∈ ℝⁿ : aᵀx ≥ b}, x₁, x₂ ∈ Ω, and λ ∈ (0, 1). Then, aᵀx₁ ≥ b and aᵀx₂ ≥ b. Therefore,
aᵀ(λx₁ + (1 − λ)x₂) = λaᵀx₁ + (1 − λ)aᵀx₂ ≥ λb + (1 − λ)b = b,
which means that λx₁ + (1 − λ)x₂ ∈ Ω. Hence, Ω is a convex set.
b. Rewrite the problem as
minimize f(x) subject to g(x) ≤ 0,
where f(x) = ‖x‖² and g(x) = b − aᵀx. Now, ∇g(x) = −a ≠ 0. Therefore, any feasible point is regular. By the Karush-Kuhn-Tucker theorem, there exists μ ≥ 0 such that
2x* − μa = 0, μ(b − aᵀx*) = 0.
Since x* is a feasible point, x* ≠ 0 (the point 0 is infeasible because b > 0). Therefore, by the first equation, we see that μ ≠ 0. The second equation then implies that b − aᵀx* = 0.
c. By the first Karush-Kuhn-Tucker equation, we have x* = μa/2. Since aᵀx* = b, then μaᵀa/2 = b, and therefore μ = 2b/‖a‖². Since x* = μa/2, x* is uniquely given by x* = ab/‖a‖².
22.15
a. Let f(x) = cᵀx and Ω = {x : x ≥ 0}. Suppose x, y ∈ Ω and λ ∈ (0, 1). Then, x, y ≥ 0. Hence, λx + (1 − λ)y ≥ 0, which means λx + (1 − λ)y ∈ Ω. Furthermore,
cᵀ(λx + (1 − λ)y) = λcᵀx + (1 − λ)cᵀy.
Therefore, f is convex. Hence, the problem is a convex programming problem.
b. ⇒: We use contraposition. Suppose cᵢ < 0 for some i. Let d = [0, …, 1, …, 0]ᵀ, where the 1 appears in the ith component. Clearly d is a feasible direction at any point x ≥ 0. However, dᵀ∇f(x) = dᵀc = cᵢ < 0. Therefore, the FONC does not hold, and no point x ≥ 0 can be a minimizer.
⇐: Suppose c ≥ 0. Let x* = 0, and let d be a feasible direction at x*. Then, d ≥ 0. Hence, dᵀ∇f(x*) = dᵀc ≥ 0. Therefore, by Theorem 22.7, x* is a solution.
c. Write g(x) = −x so that the constraint can be expressed as g(x) ≤ 0.
⇒: We have Dg(x) = −I, which has full rank. Therefore, any point is regular. Suppose a solution x* exists. Then, by the KKT theorem, there exists μ ≥ 0 such that c − μ = 0 and μᵀx* = 0. Hence, c = μ ≥ 0.
⇐: Suppose c ≥ 0. Let x* = 0 and μ = c. Then, μ ≥ 0, c − μ = 0, and μᵀx* = 0, i.e., the KKT condition is satisfied. By part b, x* = 0 is a solution to the problem.
The above also proves that if a solution exists, then 0 is a solution.
22.16
a. The standard-form problem is
minimize cᵀx
subject to Ax = b, x ≥ 0,
which can be written as
minimize f(x)
subject to h(x) = 0, g(x) ≤ 0,
where f(x) = cᵀx, h(x) = Ax − b, and g(x) = −x. Thus, we have Df(x) = cᵀ, Dh(x) = A, and Dg(x) = −I. The Karush-Kuhn-Tucker conditions for the above problem have the form:
μ ≥ 0,
cᵀ + λᵀA − μᵀ = 0ᵀ,
μᵀx = 0,
Ax = b, x ≥ 0.
b. The Karush-Kuhn-Tucker conditions are sufficient for optimality in this case because the problem is a convex optimization problem, i.e., the objective function is a convex function and the feasible set is a convex set.
c. The dual problem is
maximize λᵀb
subject to λᵀA ≤ cᵀ.
Let
μᵀ = cᵀ − λᵀA.
Since λ is feasible for the dual, we have μ ≥ 0. Rewriting the above equation, we get cᵀ − λᵀA − μᵀ = 0ᵀ.
The complementary slackness condition (cᵀ − λᵀA)x = 0 can be written as μᵀx = 0. Therefore, the Karush-Kuhn-Tucker conditions hold (with −λ playing the role of the multiplier for the equality constraint). By part b, x is optimal.
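Complementary slackness can be observed numerically via the multipliers returned by a generic LP solver; a minimal sketch with made-up data (illustrative, not part of the original solution):
% LP duality check via linprog's returned multipliers.
c = [1; 2; 3]; A = [1 1 1; 1 -1 0]; b = [1; 0.2];
lb = zeros(3,1);
[x,fval,exitflag,output,lam] = linprog(c,[],[],A,b,lb,[]);
mu = lam.lower;                   % multipliers for the bounds x >= 0
fprintf('mu''*x = %g\n', mu'*x)   % complementary slackness: ~0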
22.17
a. We can treat s₁ and s₂ as vectors in ℝⁿ. We have
span[s₁, s₂] = {s : s = x₁s₁ + x₂s₂, x₁, x₂ ∈ ℝ}.
Let a = [a₁, …, aₙ]ᵀ. The optimization problem is:
minimize ½(x₁² + x₂²)
subject to x₁s₁ + x₂s₂ = a.
b. The KKT (Lagrange) conditions are:
x₁ + λᵀs₁ = 0,
x₂ + λᵀs₂ = 0,
x₁s₁ + x₂s₂ = a.
c. Yes, because the Hessian of the Lagrangian with respect to [x₁, x₂]ᵀ is I (identity), which is positive definite.
d. Yes, this is a convex optimization problem. The objective function is quadratic, with identity Hessian (hence positive definite). The constraint set is of the form Ax = a, and hence is a linear variety.
22.18
a. We first show that the set of probability vectors
Ω = {q ∈ ℝⁿ : q₁ + ⋯ + qₙ = 1, qᵢ ≥ 0, i = 1, …, n}
is a convex set. Let y, z ∈ Ω, so y₁ + ⋯ + yₙ = 1, yᵢ ≥ 0, z₁ + ⋯ + zₙ = 1, and zᵢ ≥ 0. Let λ ∈ (0, 1) and x = λy + (1 − λ)z. We have
x₁ + ⋯ + xₙ = λ(y₁ + ⋯ + yₙ) + (1 − λ)(z₁ + ⋯ + zₙ) = λ + (1 − λ) = 1.
Also, because yᵢ ≥ 0, zᵢ ≥ 0, λ > 0, and 1 − λ > 0, we conclude that xᵢ ≥ 0. Thus, x ∈ Ω, which shows that Ω is convex.
b. We next show that the function f is a convex function on Ω. For this, we compute the Hessian
F(q) = diag(p₁/q₁², …, pₙ/qₙ²),
which shows that F(q) ≥ 0 for all q in the open set {q : qᵢ > 0, i = 1, …, n}, which contains Ω. Therefore, f is convex on Ω.
c. Fix a probability vector p. Consider the optimization problem
minimize p₁ log(p₁/x₁) + ⋯ + pₙ log(pₙ/xₙ)
subject to x₁ + ⋯ + xₙ = 1,
xᵢ ≥ 0, i = 1, …, n.
By parts a and b, the problem is a convex optimization problem. We ignore the constraint xᵢ ≥ 0 and write down the Lagrange conditions for the equality-constrained problem:
−pᵢ/xᵢ + λ = 0, i = 1, …, n,
x₁ + ⋯ + xₙ = 1.
Rewrite the first set of equations as xᵢ = pᵢ/λ. Combining this with the constraint and the fact that p₁ + ⋯ + pₙ = 1, we obtain λ = 1, which means that xᵢ = pᵢ. Therefore, the unique global minimizer is x* = p.
Note that f(x*) = 0. Hence, we conclude that f(x) ≥ 0 for all x ∈ Ω. Moreover, f(x) = 0 if and only if x = p. This proves the required result.
d. Given two probability vectors p and q, the number
D(p, q) = p₁ log(p₁/q₁) + ⋯ + pₙ log(pₙ/qₙ)
is called the relative entropy or Kullback-Leibler divergence between p and q. It is used in information theory to measure the "distance" between two probability vectors. The result of part c justifies the use of D as a measure of distance (although D is not a metric, because it is not symmetric).
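The key properties are easy to observe numerically (the vectors below are illustrative):
% D(p,q) is nonnegative, zero iff q = p, and not symmetric.
D = @(p,q) sum(p .* log(p./q));
p = [0.5; 0.3; 0.2]; q = [0.3; 0.3; 0.4];
fprintf('D(p,q) = %g, D(q,p) = %g, D(p,p) = %g\n', D(p,q), D(q,p), D(p,p))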
22.19
We claim that a solution exists, and that it is unique. To prove the first claim, choose r > 0 such that there exists x ∈ Ω satisfying ‖x − z‖ ≤ r. Consider the modified problem
minimize ‖x − z‖
subject to x ∈ Ω ∩ {y : ‖y − z‖ ≤ r}.
If this modified problem has a solution, then clearly so does the original problem. The objective function here is continuous, and the constraint set is closed and bounded. Hence, by Weierstrass's theorem, a solution to the problem exists.
Let f be the objective function. Next, we show that f is convex, and hence the problem is a convex optimization problem. Let x, y ∈ Ω and λ ∈ (0, 1). Then,
f(λx + (1 − λ)y) = ‖λx + (1 − λ)y − z‖
= ‖λ(x − z) + (1 − λ)(y − z)‖
≤ λ‖x − z‖ + (1 − λ)‖y − z‖ = λf(x) + (1 − λ)f(y),
which shows that f is convex.
To prove uniqueness, let x₁ and x₂ be solutions to the problem. Then, by convexity, x₃ = (x₁ + x₂)/2 is also a solution. But
‖x₃ − z‖ = ‖(x₁ + x₂)/2 − z‖
= ‖(x₁ − z)/2 + (x₂ − z)/2‖
≤ ½‖x₁ − z‖ + ½‖x₂ − z‖
= ‖x₃ − z‖,
from which we conclude that the triangle inequality above holds with equality, implying that x₁ − z = β(x₂ − z) for some β ≥ 0. Because ‖x₁ − z‖ = ‖x₂ − z‖, we have β = 1. From this, we obtain x₁ = x₂, which proves uniqueness.
22.20
a. Let A, B ∈ ℝⁿˣⁿ be symmetric with A ≥ 0 and B ≥ 0. Fix λ ∈ [0, 1] and x ∈ ℝⁿ, and let C = λA + (1 − λ)B. Then,
xᵀCx = xᵀ(λA + (1 − λ)B)x = λxᵀAx + (1 − λ)xᵀBx.
Since xᵀAx ≥ 0, xᵀBx ≥ 0, and λ, 1 − λ ≥ 0 by assumption, then xᵀCx ≥ 0, which proves the required result.
b. We first show that the constraint set Ω = {x : F₀ + Σⱼ₌₁ⁿ xⱼFⱼ ≥ 0} is convex. So, let x, y ∈ Ω and λ ∈ (0, 1). Let z = λx + (1 − λ)y. Then,
F₀ + Σⱼ₌₁ⁿ zⱼFⱼ = λ(F₀ + Σⱼ₌₁ⁿ xⱼFⱼ) + (1 − λ)(F₀ + Σⱼ₌₁ⁿ yⱼFⱼ).
By assumption, we have
F₀ + Σⱼ₌₁ⁿ xⱼFⱼ ≥ 0 and F₀ + Σⱼ₌₁ⁿ yⱼFⱼ ≥ 0.
By part a, we conclude that
F₀ + Σⱼ₌₁ⁿ zⱼFⱼ ≥ 0,
which implies that z ∈ Ω.
To show that the objective function f(x) = cᵀx is convex on Ω, let x, y ∈ Ω and λ ∈ (0, 1). Then,
f(λx + (1 − λ)y) = cᵀ(λx + (1 − λ)y)
= λcᵀx + (1 − λ)cᵀy
= λf(x) + (1 − λ)f(y),
which shows that f is convex.
c. The objective function is already in the required form. To rewrite the constraint, let aᵢ,ⱼ be the (i, j)th entry of A, i = 1, …, m, j = 1, …, n. Then, the constraint Ax ≥ b can be written as
aᵢ,₁x₁ + aᵢ,₂x₂ + ⋯ + aᵢ,ₙxₙ ≥ bᵢ, i = 1, …, m.
Now form the diagonal matrices
F₀ = −diag(b₁, …, bₘ),
Fⱼ = diag(a₁,ⱼ, …, aₘ,ⱼ), j = 1, …, n.
Note that a diagonal matrix is positive semidefinite if and only if every diagonal element is nonnegative. Hence, the constraint Ax ≥ b can be written as F₀ + Σⱼ₌₁ⁿ xⱼFⱼ ≥ 0: the left-hand side is a diagonal matrix, and the ith diagonal element is simply aᵢ,₁x₁ + ⋯ + aᵢ,ₙxₙ − bᵢ.
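A short sketch of this construction, under the sign convention used above (the data values are illustrative):
% Check: Ax >= b iff the diagonal matrix F0 + sum_j x_j F_j is PSD.
A = [1 2; -1 1; 0 3]; b = [1; 0; 2]; x = [2; 1];
F = -diag(b);                     % F0 = -diag(b)
for j = 1:size(A,2)
  F = F + x(j)*diag(A(:,j));      % F_j = diag(a_{1j},...,a_{mj})
end
disp(isequal(diag(F), A*x - b))   % diagonal of F equals Ax - b
disp(all(eig(F) >= 0) == all(A*x - b >= 0))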
22.21
a. We have
Ω = {x : x₁ + ⋯ + xₙ = 1; x₁, …, xₙ ≥ 0; x₁ ≥ 2xᵢ, i = 2, …, n}.
So let x, y ∈ Ω and λ ∈ (0, 1), and consider z = λx + (1 − λ)y. We have
z₁ + ⋯ + zₙ = λ(x₁ + ⋯ + xₙ) + (1 − λ)(y₁ + ⋯ + yₙ) = λ + (1 − λ) = 1.
Moreover, for each i, because xᵢ ≥ 0, yᵢ ≥ 0, λ > 0, and 1 − λ > 0, we have zᵢ ≥ 0. Finally, for each i,
z₁ = λx₁ + (1 − λ)y₁ ≥ λ·2xᵢ + (1 − λ)·2yᵢ = 2zᵢ.
Hence, z ∈ Ω, which implies that Ω is convex.
b. We first show that the negative of the objective function is convex. For this, we compute its Hessian, which turns out to be a diagonal matrix with ith diagonal entry 1/xᵢ², which is strictly positive. Hence, the Hessian is positive definite, which implies that the negative of the objective function is convex.
Combining the above with part a, we conclude that the problem is a convex optimization problem. Hence, the FONC for set constraints is necessary and sufficient. Let x* be a given allocation. The FONC at x* is dᵀ∇f(x*) ≥ 0 for all feasible directions d at x*. But because Ω is convex, the FONC can be written as
(y − x*)ᵀ∇f(x*) ≥ 0 for all y ∈ Ω.
Computing ∇f(x) for f(x) = −Σᵢ₌₁ⁿ log xᵢ, we get the proportional fairness condition.
22.22
a. We rewrite the problem as a minimization problem by multiplying the objective function by −1. Thus, the new objective function is the sum of the functions −Uᵢ. Because each Uᵢ is concave, −Uᵢ is convex, and hence their sum is convex.
To show that the constraint set Ω = {x : eᵀx ≤ C}, where e = [1, …, 1]ᵀ, is convex, let x₁, x₂ ∈ Ω and λ ∈ (0, 1). Then, eᵀx₁ ≤ C and eᵀx₂ ≤ C. Therefore,
eᵀ(λx₁ + (1 − λ)x₂) = λeᵀx₁ + (1 − λ)eᵀx₂ ≤ λC + (1 − λ)C = C,
which means that λx₁ + (1 − λ)x₂ ∈ Ω. Hence, Ω is a convex set.
b. Because the problem is a convex optimization problem, the following KKT condition is necessary and sufficient for x* to be a global minimizer:
μ ≥ 0,
−Uᵢ′(xᵢ*) + μ = 0, i = 1, …, n,
μ(Σᵢ₌₁ⁿ xᵢ* − C) = 0,
Σᵢ₌₁ⁿ xᵢ* ≤ C.
Note that because −Uᵢ(xᵢ) + μxᵢ is a convex function of xᵢ, the second line above can be written as
xᵢ* ∈ arg maxₓ (Uᵢ(x) − μx).
c. Because each Uᵢ is concave and increasing, we conclude that Σᵢ₌₁ⁿ xᵢ* = C; for otherwise we could increase some xᵢ* and hence Uᵢ(xᵢ*) (and also Σᵢ₌₁ⁿ Uᵢ(xᵢ*)), contradicting the optimality of x*.
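For the special case of logarithmic utilities the KKT conditions have a closed form; a sketch (the utilities Uᵢ(x) = aᵢ log x below are an illustrative assumption):
% For U_i(x) = a_i*log(x), the condition U_i'(x_i) = mu gives
% x_i = a_i/mu, and sum x_i = C yields x_i = a_i*C/sum(a).
a = [1; 2; 3]; C = 6;
xstar = a*C/sum(a);               % closed form from the KKT conditions
mu = sum(a)/C;                    % common multiplier
disp([a./xstar, mu*ones(3,1)])    % U_i'(x_i*) = a_i/x_i* equals mu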
22.23
First note that the optimization problem that we construct cannot be a convex problem, for otherwise the FONC would imply that x* is a global minimizer, which in turn would imply that the SONC holds. Let f(x) = x₂, g(x) = −x₂ − x₁², and x* = 0. Then, ∇f(x*) = [0, 1]ᵀ. Any feasible direction d at x* is of the form d = [d₁, d₂]ᵀ with d₂ ≥ 0. Hence, dᵀ∇f(x*) = d₂ ≥ 0, which shows that the FONC holds.
Because ∇g(x*) = [0, −1]ᵀ ≠ 0 (so x* is regular), we see that if μ = 1, then ∇f(x*) + μ∇g(x*) = 0, and so the KKT condition holds.
Because F(x*) = O, the SONC for the set constraint holds. However,
L(x*, μ) = O + 1·[−2 0; 0 0] = [−2 0; 0 0]
and T(x*) = {y : y₂ = 0}, so for y = [1, 0]ᵀ ∈ T(x*) we get yᵀL(x*, μ)y = −2 < 0, which shows that the SONC for the inequality constraint g(x) ≤ 0 does not hold.
22.24
a. Let x₀ and μ₀ be feasible points in the primal and dual, respectively. Then, g(x₀) ≤ 0 and μ₀ ≥ 0, and so μ₀ᵀg(x₀) ≤ 0. Hence,
f(x₀) ≥ f(x₀) + μ₀ᵀg(x₀) = l(x₀, μ₀) ≥ min_{x∈ℝⁿ} l(x, μ₀) = q(μ₀).
b. Suppose f(x₀) = q(μ₀) for feasible points x₀ and μ₀. Let x be any feasible point in the primal. Then, by part a, f(x) ≥ q(μ₀) = f(x₀). Hence, x₀ is optimal in the primal.
Similarly, let μ be any feasible point in the dual. Then, by part a, q(μ) ≤ f(x₀) = q(μ₀). Hence, μ₀ is optimal in the dual.
c. Let x* be optimal in the primal. Then, by the KKT theorem, there exists μ* ∈ ℝᵐ such that
∇ₓl(x*, μ*) = (Df(x*) + μ*ᵀDg(x*))ᵀ = 0,
μ*ᵀg(x*) = 0,
μ* ≥ 0.
Therefore, μ* is feasible in the dual. Further, we note that l(·, μ*) is a convex function, because f is convex, μ*ᵀg is convex (being a nonnegative combination of convex functions), and l is the sum of these two convex functions. Hence, the condition ∇ₓl(x*, μ*) = 0 implies that l(x*, μ*) = min_{x∈ℝⁿ} l(x, μ*). Therefore,
q(μ*) = min_{x∈ℝⁿ} l(x, μ*) = l(x*, μ*) = f(x*) + μ*ᵀg(x*) = f(x*).
By part b, μ* is optimal in the dual.
22.25
a. The Schur complement of M(1, 1) is
M(2:3, 2:3) − M(2:3, 1) M(1, 1)⁻¹ M(1, 2:3).
b. The Schur complement of M(2:3, 2:3) is
M(1, 1) − M(1, 2:3) M(2:3, 2:3)⁻¹ M(2:3, 1).
In each case, the value is obtained by substituting the entries of the given matrix M into the formula.
22.26
Let
P = [x₁ x₂; x₂ x₃]
and let
P₁ = [1 0; 0 0], P₂ = [0 1; 1 0], P₃ = [0 0; 0 1],
so that P = x₁P₁ + x₂P₂ + x₃P₃. Then, we can represent the Lyapunov inequality AᵀP + PA < 0 as
x₁(AᵀP₁ + P₁A) + x₂(AᵀP₂ + P₂A) + x₃(AᵀP₃ + P₃A) < 0,
that is,
x₁F₁ + x₂F₂ + x₃F₃ < 0,
where
Fᵢ = AᵀPᵢ + PᵢA, i = 1, 2, 3.
Equivalently, P = Pᵀ and AᵀP + PA < 0 if and only if
F(x) = x₁F₁ + x₂F₂ + x₃F₃ < 0.
22.27
The quadratic inequality
AᵀP + PA + PBR⁻¹BᵀP < 0, R > 0,
can be equivalently represented by the following LMI:
[R BᵀP; PB −AᵀP − PA] > 0,
or as the following LMI:
[−AᵀP − PA PB; BᵀP R] > 0.
It is easy to verify, using Schur complements, that the above two LMIs are equivalent to the following quadratic matrix inequality:
[AᵀP + PA + PBR⁻¹BᵀP O; O −R] < 0.
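The Schur-complement equivalence can be illustrated numerically (the matrices A, B, R, P below are illustrative):
% The LMI is positive definite exactly when the Riccati-type
% expression is negative definite.
A = [-2 1; 0 -3]; B = [1; 1]; R = 2; P = eye(2);
LMI = [-(A'*P + P*A), P*B; B'*P, R];
Ric = A'*P + P*A + P*B*(R\(B'*P));
fprintf('min eig LMI = %g, max eig Riccati = %g\n', ...
        min(eig(LMI)), max(eig(Ric)))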
22.28
The MATLAB code is as follows:
A = [0.9501 0.4860 0.4565;
     0.2311 0.8913 0.0185;
     0.6068 0.7621 0.8214];
setlmis([]);
P = lmivar(1,[3 1]);            % P: 3x3 symmetric matrix variable
lmiterm([1 1 1 P],1,A,'s');     % LMI 1: A'P + PA < 0
lmiterm([2 1 1 0],0.1);         % LMI 2, left side: 0.1*I
lmiterm([-2 1 1 P],1,1);        % LMI 2, right side: P  (i.e., 0.1*I < P)
lmiterm([3 1 1 P],1,1);         % LMI 3, left side: P
lmiterm([-3 1 1 0],1);          % LMI 3, right side: I  (i.e., P < I)
lmis = getlmis;
[tmin,xfeas] = feasp(lmis);
P = dec2mat(lmis,xfeas,P)
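The result can be checked without the LMI toolbox by inspecting eigenvalues, assuming the three LMIs encode AᵀP + PA < 0 and 0.1·I < P < I as above (feasp reports tmin < 0 only when the system is strictly feasible; for an unstable A it is infeasible):
% Eigenvalue check of the computed P (run after the code above).
mineig = @(M) min(eig((M + M')/2));
fprintf('max eig(A''P+PA) = %g (negative if LMI 1 holds)\n', ...
        max(eig(A'*P + P*A)))
fprintf('min eig(P - 0.1I) = %g (positive if LMI 2 holds)\n', ...
        mineig(P - 0.1*eye(3)))
fprintf('min eig(I - P)   = %g (positive if LMI 3 holds)\n', ...
        mineig(eye(3) - P))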
23. Algorithms for Constrained Optimization
23.1
a. By drawing a simple picture, it is easy to see that Π(x) = x/‖x‖, provided x ≠ 0.
b. By inspection, we see that the solutions are [0, 1]ᵀ and [0, −1]ᵀ. (Or use Rayleigh's inequality.)
c. Now,
x^(k+1) = Π(x^(k) + α∇f(x^(k))) = Π(x^(k) + αQx^(k)) = γₖ(I + αQ)x^(k),
where γₖ = 1/‖(I + αQ)x^(k)‖. For the particular given form of Q, we have
x₁^(k+1) = γₖ(1 + α)x₁^(k),
x₂^(k+1) = γₖ(1 + 2α)x₂^(k).
d. Assuming x₂^(0) ≠ 0, y^(k) = x₁^(k)/x₂^(k) is well defined. Hence, by part c, we can write
y^(k+1) = ((1 + α)/(1 + 2α)) y^(k),
and therefore
y^(k) = ((1 + α)/(1 + 2α))^k y^(0).
Because α > 0, we have (1 + α)/(1 + 2α) < 1, which implies that y^(k) → 0. But ‖x^(k)‖ = 1, so
(x₂^(k))²((y^(k))² + 1) = (x₁^(k))² + (x₂^(k))² = 1,
which implies that
x₂^(k) = ±1/√((y^(k))² + 1).
Because y^(k) → 0, we have (x₂^(k))² → 1. By the expression for x₂^(k+1) in part c, we see that the sign of x₂^(k) does not change with k. Hence, we deduce that either x₂^(k) → 1 or x₂^(k) → −1. This also implies that x₁^(k) → 0. Hence, x^(k) converges to a solution to the problem.
e. If x₂^(0) = 0, then x₂^(k) = 0 for all k, which means that x₁^(k) = 1 or −1 for all k. In this case, the algorithm is stuck at the initial condition [1, 0]ᵀ or [−1, 0]ᵀ, which are in fact the minimizers.
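A short simulation of the iteration in part c, assuming Q = diag(1, 2) and a step size α = 0.1 (both illustrative assumptions consistent with the analysis above):
% Iterate x <- Pi(x + alpha*Q*x) on the unit circle.
Q = diag([1 2]); alpha = 0.1;
x = [0.9; sqrt(1 - 0.9^2)];       % a point on the unit circle
for k = 1:100
  v = (eye(2) + alpha*Q)*x;       % x + alpha*Q*x
  x = v/norm(v);                  % projection onto the unit sphere
end
disp(x')                          % converges to (0, 1) or (0, -1)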
23.2
a. Yes. To show this, suppose that x^(k) is a global minimizer of the given problem. Then, for all x ∈ Ω, we have cᵀx ≥ cᵀx^(k); rewriting, cᵀ(x − x^(k)) ≥ 0. Recall that
x^(k+1) = Π(x^(k) − α∇f(x^(k))) = arg min_{x∈Ω} ‖x − (x^(k) − αc)‖² = arg min_{x∈Ω} ‖x − x^(k) + αc‖².
But, for any x ∈ Ω, x ≠ x^(k),
‖x − x^(k) + αc‖² = ‖x − x^(k)‖² + ‖αc‖² + 2αcᵀ(x − x^(k)) > ‖αc‖²,
where we used the facts that ‖x − x^(k)‖² > 0 and cᵀ(x − x^(k)) ≥ 0. On the other hand, ‖x^(k) − x^(k) + αc‖² = ‖αc‖². Hence,
x^(k+1) = Π(x^(k) − α∇f(x^(k))) = x^(k).
b. No. Counterexample:
23.3
a. Suppose x^(k) satisfies the FONC. Then, ∇f(x^(k)) = 0. Hence, x^(k+1) = x^(k). Conversely, suppose x^(k) does not satisfy the FONC. Then, ∇f(x^(k)) ≠ 0. Hence, αₖ > 0, and so x^(k+1) ≠ x^(k).
b. Case (i): Suppose x^(k) is a corner point. Without loss of generality, take x^(k) = [1, 1]ᵀ. (We can do this because any other corner point can be mapped to this point by changing variables xᵢ to −xᵢ as appropriate.) Note that any feasible direction d at x^(k) = [1, 1]ᵀ satisfies d ≤ 0. Therefore,
x^(k+1) = x^(k) ⟺ ∇f(x^(k)) ≤ 0
⟺ dᵀ∇f(x^(k)) ≥ 0 for all feasible d at x^(k)
⟺ x^(k) satisfies the FONC.
Case (ii): Suppose x^(k) is not a corner point (i.e., it is an edge point). Without loss of generality, take x^(k) ∈ {x : x₁ = 1, −1 < x₂ < 1}. (We can do this because any other edge point can be mapped to this point by changing variables xᵢ to −xᵢ as appropriate.) Note that any feasible direction d at such x^(k) satisfies d₁ ≤ 0. Therefore,
x^(k+1) = x^(k) ⟺ ∇f(x^(k)) = [a, 0]ᵀ with a ≤ 0
⟺ dᵀ∇f(x^(k)) ≥ 0 for all feasible d at x^(k)
⟺ x^(k) satisfies the FONC.
23.4
By definition of Π, we have
Π(x₀ + y) = arg min_{x∈Ω} ‖x − (x₀ + y)‖ = arg min_{x∈Ω} ‖(x − x₀) − y‖.
Writing z = x − x₀, and noting that x ∈ Ω if and only if z ∈ N(A) (because Ax₀ = b), we have
arg min_{x∈Ω} ‖x − x₀ − y‖ = x₀ + arg min_{z∈N(A)} ‖z − y‖.
The term arg min_{z∈N(A)} ‖z − y‖ is simply the orthogonal projection of y onto N(A). By Exercise 6.7, we have
arg min_{z∈N(A)} ‖z − y‖ = Py,
where P = I − Aᵀ(AAᵀ)⁻¹A. Hence,
Π(x₀ + y) = x₀ + Py.
23.5
Since αₖ ≥ 0 is a minimizer of φₖ(α) = f(x^(k) − αPg^(k)), we apply the FONC to φₖ to obtain
φₖ′(α) = (x^(k) − αPg^(k))ᵀQ(−Pg^(k)) − bᵀ(−Pg^(k)).
Therefore, φₖ′(α) = 0 if αg^(k)ᵀPQPg^(k) = (x^(k)ᵀQ − bᵀ)Pg^(k). But g^(k) = Qx^(k) − b and P = Pᵀ. Hence,
αₖ = g^(k)ᵀPg^(k) / (g^(k)ᵀPQPg^(k)).
23.6
By Exercise 23.5, the projected steepest descent algorithm applied to this problem takes the form
x^(k+1) = x^(k) − αₖPx^(k),
where g^(k) = x^(k) (here Q = Iₙ and b = 0) and
αₖ = x^(k)ᵀPx^(k)/(x^(k)ᵀP²x^(k)) = 1,
because P² = P. Hence,
x^(k+1) = x^(k) − Px^(k) = (Iₙ − P)x^(k) = Aᵀ(AAᵀ)⁻¹Ax^(k).
If x^(0) ∈ {x : Ax = b}, then Ax^(0) = b, and hence
x^(1) = Aᵀ(AAᵀ)⁻¹b,
which solves the problem (see Section 12.3).
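A one-line numerical confirmation (the matrix A and vector b are illustrative):
% One projected steepest descent step lands on the minimum-norm solution.
A = [1 2 1; 0 1 -1]; b = [1; 2];
x0 = pinv(A)*b + null(A)*randn(1);  % a feasible point: A*x0 = b
P = eye(3) - A'*((A*A')\A);         % orthogonal projector onto N(A)
x1 = x0 - P*x0;                     % one step with alpha_k = 1
disp(norm(x1 - A'*((A*A')\b)))      % ~0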
23.7
a. Define
φₖ(α) = f(x^(k) − αP∇f(x^(k))).
By the chain rule,
φₖ′(α) = −∇f(x^(k) − αP∇f(x^(k)))ᵀP∇f(x^(k)).
Since αₖ minimizes φₖ, we have φₖ′(αₖ) = 0, and thus g^(k+1)ᵀPg^(k) = 0.
b. We have x^(k+1) = x^(k) − αₖPg^(k) and x^(k+2) = x^(k+1) − αₖ₊₁Pg^(k+1). Therefore,
(x^(k+2) − x^(k+1))ᵀ(x^(k+1) − x^(k)) = αₖ₊₁αₖ(Pg^(k+1))ᵀ(Pg^(k)) = αₖ₊₁αₖ g^(k+1)ᵀPg^(k) = 0,
by part a, and the fact that PᵀP = P² = P.
23.8
a. The associated unconstrained problem is
minimize f(x) + γP(x).
b. Suppose x̄ is a global minimizer of the unconstrained problem and x̄ ∉ Ω. Then, P(x̄) > 0 by definition of P. Because x̄ is a global minimizer of the unconstrained problem, we have
f(x̄) + γP(x̄) ≤ f(x*) + γP(x*) = f(x*),
which implies that
f(x̄) ≤ f(x*) − γP(x̄) < f(x*).
23.9
We use the penalty method. First, we construct the unconstrained objective function with penalty parameter γ:
q(γ, x) = x₁² + 2x₂² + γ(x₁ + x₂ − 3)².
Because q is a quadratic with positive definite quadratic term, it is easy to find its minimizer. For example, we can obtain it by solving the FONC:
2x₁ + 2γ(x₁ + x₂ − 3) = 0,
4x₂ + 2γ(x₁ + x₂ − 3) = 0.
Subtracting the two equations gives x₁ = 2x₂, and substituting back yields
x^γ = [6γ/(2 + 3γ), 3γ/(2 + 3γ)]ᵀ.
Now letting γ → ∞, we obtain
x* = [2, 1]ᵀ.
It is easy to verify, using other means, that this is indeed the correct solution.
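A short sweep of the penalty parameter illustrates the convergence, assuming the reconstructed objective x₁² + 2x₂² and constraint x₁ + x₂ = 3 above:
% Minimizers of q(gamma,x) approach (2,1) as gamma grows.
for gamma = [1 10 100 1000]
  H = [2 + 2*gamma, 2*gamma; 2*gamma, 4 + 2*gamma];  % Hessian of q
  g = [6*gamma; 6*gamma];                            % from the FONC
  x = H\g;
  fprintf('gamma = %6g  x = (%.4f, %.4f)\n', gamma, x(1), x(2))
end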
23.10
Using the penalty method, we construct the unconstrained problem
minimize x + γ(max{a − x, 0})².
To find the solution to the above problem, we use the FONC. It is easy to see that the solution x^γ satisfies x^γ < a. The derivative of the above objective function in the region x < a is 1 − 2γ(a − x). Thus, by the FONC, we have x^γ = a − 1/(2γ). Since the true solution is at a, the difference is 1/(2γ). Therefore, for the difference to be at most ε, we need γ ≥ 1/(2ε). The smallest such γ is 1/(2ε).
23.11
a. We have
q(γ, x) = ½‖x‖² + γ‖Ax − b‖² = ½(x₁² + x₂²) + γ(x₁ + x₂ − 1)².
The above is a quadratic with positive definite Hessian. Therefore, the minimizer is
x^γ = (2γ/(1 + 4γ)) [1, 1]ᵀ.
Hence,
lim_{γ→∞} x^γ = ½[1, 1]ᵀ.
The solution to the original constrained problem is (see Section 12.3)
x* = Aᵀ(AAᵀ)⁻¹b = ½[1, 1]ᵀ.
b. We represent the objective function of the associated unconstrained problem as
q(γ, x) = ½‖x‖² + γ‖Ax − b‖² = ½xᵀ(Iₙ + 2γAᵀA)x − 2γxᵀAᵀb + γbᵀb.
The above is a quadratic with positive definite Hessian. Therefore, the minimizer is
x^γ = (Iₙ + 2γAᵀA)⁻¹2γAᵀb = ((1/(2γ))Iₙ + AᵀA)⁻¹Aᵀb.
Let A = U[S O]Vᵀ be the singular value decomposition of A. For simplicity, denote ε = 1/(2γ). We have
x^γ = (εIₙ + AᵀA)⁻¹Aᵀb
= (εIₙ + V[S² O; O O]Vᵀ)⁻¹ V[S; O]Uᵀb
= V[(εIₘ + S²)⁻¹ O; O ε⁻¹Iₙ₋ₘ] [S; O]Uᵀb
= V[(εIₘ + S²)⁻¹S; O]Uᵀb
= V[S; O](εIₘ + S²)⁻¹Uᵀb  (the diagonal matrices S and (εIₘ + S²)⁻¹ commute)
= AᵀU(εIₘ + S²)⁻¹Uᵀb,
where (εIₘ + S²)⁻¹ is diagonal. Note that as γ → ∞, we have ε → 0, and
U(εIₘ + S²)⁻¹Uᵀ → US⁻²Uᵀ = (US²Uᵀ)⁻¹ = (AAᵀ)⁻¹.
Therefore,
x^γ → Aᵀ(AAᵀ)⁻¹b = x*.
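The limit in part b is easy to observe numerically on random data (an illustrative sketch):
% x_gamma -> A'(AA')^{-1}b as gamma -> infinity.
m = 2; n = 4;
A = randn(m,n); b = randn(m,1);
xstar = A'*((A*A')\b);              % minimum-norm solution
for gamma = [1 10 100 1e4]
  xg = (eye(n) + 2*gamma*(A'*A))\(2*gamma*A'*b);
  fprintf('gamma = %8g  ||x_gamma - x*|| = %g\n', gamma, norm(xg - xstar))
end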
24. Multi-Objective Optimization
24.1
The MATLAB code is as follows:
function multiop
% MULTIOP, illustrates multiobjective optimization.
clear
clc
disp(' ')
disp('This is a demo illustrating multiobjective optimization.')
disp('The numerical example is a modification of the example')
disp('from the 2002 book by A. Osyczka,')
disp('Example 5.1 on pages 101-105')
disp(' ')
disp('Select the population size (denoted POPSIZE), for example, 50.')
disp(' ')
POPSIZE = input('Population size POPSIZE = ');
disp(' ')
disp('Select the number of iterations (denoted NUMITER); e.g., 10.')
disp(' ')
NUMITER = input('Number of iterations NUMITER = ');
disp(' ')
disp(' ')
% Main
for i = 1:NUMITER
  fprintf('Working on Iteration %.0f\n',i)
  xmat = genxmat(POPSIZE);
  if i > 1
    for j = 1:length(xR)
      xmat = [xmat; xR{j}];
    end
  end
  [xR,fR] = SelectP(xmat);
  fprintf('Number of Pareto solutions: %.0f\n',length(fR))
end
disp(' ')
disp(' ')
fprintf('Pareto solutions\n')
celldisp(xR)
disp(' ')
disp(' ')
fprintf('Objective vector values\n')
celldisp(fR)
xlabel('f1','Fontsize',16)
ylabel('f2','Fontsize',16)
title('Pareto optimal front','Fontsize',16)
set(gca,'Fontsize',16)
grid
for i = 1:length(xR)
  xx(i) = xR{i}(1);
  yy(i) = xR{i}(2);
end
XX = [xx; yy];
figure
axis([1 7 5 10])
hold on
for i = 1:size(XX,2)
  plot(XX(1,i),XX(2,i),'marker','o','markersize',6)
end
xlabel('x1','Fontsize',16)
ylabel('x2','Fontsize',16)
title('Pareto optimal solutions','Fontsize',16)
set(gca,'Fontsize',16)
grid
hold off
figure
axis([2 10 2 13])
hold on
plot([2 6],[5 5],'marker','o','markersize',6)
plot([6 6],[5 9],'marker','o','markersize',6)
plot([2 6],[9 9],'marker','o','markersize',6)
plot([2 2],[5 9],'marker','o','markersize',6)
for i = 1:size(XX,2)
  plot(XX(1,i),XX(2,i),'marker','x','markersize',10)
end
x1 = 2:.2:10;
x2 = 2:.2:13;
[X1, X2] = meshgrid(x1,x2);
Z1 = X1.^2 + X2;
v = [0 5 7 10 15 20 30 40 60];
cs1 = contour(X1,X2,Z1,v);
clabel(cs1)
Z2 = X1 + X2.^2;
v2 = [20 25 35 40 60 80 100 120];
cs2 = contour(X1,X2,Z2,v2);
clabel(cs2)
xlabel('x1','Fontsize',16)
ylabel('x2','Fontsize',16)
title('Level sets of f1 and f2, and Pareto optimal points','Fontsize',16)
set(gca,'Fontsize',16)
grid
hold off

function xmat0 = genxmat(POPSIZE)
xmat0 = rand(POPSIZE,2);
xmat0(:,1) = xmat0(:,1)*4 + 2;
xmat0(:,2) = xmat0(:,2)*4 + 5;

function [xR,fR] = SelectP(xmat)
% Declaration
J = size(xmat,1);
% Init
Rset = [1];
j = 1;
isstep7 = 0;
% Step 1
x{1} = xmat(1,:);
f{1} = evalfcn(x{1});
% Step 2
while j < J
  j = j + 1;
  % Step 3
  r = 1;
  rdel = [];
  q = 0;
  R = length(Rset);
  for k = 1:size(xmat,1)
    x{k} = xmat(k,:);
    f{k} = evalfcn(x{k});
  end
  % Step 4
  while 1
    if all(f{j} <= f{Rset(r)})      % candidate j dominates member r
      q = q + 1;
      rdel = [rdel r];
    else
      % Step 5
      if all(f{j} >= f{Rset(r)})    % candidate j is dominated; discard
        break
      end
    end
    % Step 6
    r = r + 1;
    if r > R
      isstep7 = 1;
      break
    end
  end
  % Step 7
  if isstep7 == 1
    isstep7 = 0;
    if q > 0
      Rset(rdel) = [];
      Rset = [Rset j];
    else
      % Step 8
      Rset = [Rset j];
    end
  end
  for k = 1:size(xmat,1)
    x{k} = xmat(k,:);
    f{k} = evalfcn(x{k});
  end
  R = length(Rset);
end
% Return the Pareto solutions.
for i = 1:length(Rset)
  xR{i} = x{Rset(i)};
  fR{i} = f{Rset(i)};
end
x1 = [];
y1 = [];
x2 = [];
y2 = [];
for k = 1:size(xmat,1)
  if ismember(k,Rset)
    x1 = [x1 f{k}(1)];
    y1 = [y1 f{k}(2)];
  else
    x2 = [x2 f{k}(1)];
    y2 = [y2 f{k}(2)];
  end
end
newplot
plot(x1,y1,'xr',x2,y2,'.b')
drawnow

function y = f1(x)
y = x(1)^2 + x(2);
% The above function is the original function in Osyczka's 2002 book,
% Example 5.1, page 101.
% Its negative makes a much more interesting example.
y = -(x(1)^2 + x(2));

function y = f2(x)
y = -(x(1) + x(2)^2);

function y = evalfcn(x)
y(1) = f1(x);
y(2) = f2(x);
24.2
a. We proceed using contraposition. Assume that x* is not Pareto optimal. Therefore, there exists a point x ∈ Ω such that
fᵢ(x) ≤ fᵢ(x*) for all i = 1, 2, …, ℓ,
and, for some j, fⱼ(x) < fⱼ(x*). Since c > 0,
cᵀf(x) < cᵀf(x*),
which implies that x* is not a global minimizer for the weighted-sum problem.
For the converse, consider the following counterexample: Ω = {x ∈ ℝ² : ‖x‖ ≥ 1, x ≥ 0} and f(x) = [x₁, x₂]ᵀ. It is easy to see that the Pareto front is {x : ‖x‖ = 1, x ≥ 0} (i.e., the part of the unit circle in the nonnegative quadrant). So x* = (1/√2)[1, 1]ᵀ is a Pareto minimizer. However, there is no c > 0 such that x* is a global minimizer of the weighted-sum problem. To see this, fix c > 0, assuming c₁ ≤ c₂ without loss of generality, and consider the objective function value cᵀf(x*) = (c₁ + c₂)/√2 for the weighted-sum problem. Now, the point x⁰ = [1, 0]ᵀ is also a feasible point. Moreover, cᵀf(x⁰) = c₁ < (c₁ + c₂)/√2 = cᵀf(x*), because c₁(√2 − 1) < c₁ ≤ c₂. So x* is not a global minimizer of the weighted-sum problem.
b. We proceed using contraposition. Assume that x* is not Pareto optimal. Therefore, there exists a point x ∈ Ω such that
fᵢ(x) ≤ fᵢ(x*) for all i = 1, 2, …, ℓ,
and, for some j, fⱼ(x) < fⱼ(x*). By assumption, fᵢ(x) ≥ 0 for all i = 1, …, ℓ, which implies that
Σᵢ₌₁ℓ fᵢ(x)ᵖ < Σᵢ₌₁ℓ fᵢ(x*)ᵖ,
because p > 0. Hence, x* is not a global minimizer for the minimum-norm problem.
For the converse, consider the following counterexample: Ω = {x ∈ ℝ² : x₁ + 2x₂ ≥ 2, x ≥ 0} and f(x) = [x₁, x₂]ᵀ. It is easy to see that the Pareto front is {x : x₁ + 2x₂ = 2, x ≥ 0}. So x* = [1, 1/2]ᵀ is a Pareto minimizer. However, there is no p > 0 such that x* is a global minimizer of the minimum-norm problem. To see this, fix p > 0 and consider the objective function value f₁(x*)ᵖ + f₂(x*)ᵖ = 1 + (1/2)ᵖ for the minimum-norm problem. Now, the point x⁰ = [0, 1]ᵀ is also a feasible point. Moreover, f₁(x⁰)ᵖ + f₂(x⁰)ᵖ = 1 < 1 + (1/2)ᵖ.
So x* is not a global minimizer of the minimum-norm problem.
c. For the first part, consider the following counterexample: Ω = {x ∈ ℝ² : x₁ + x₂ ≥ 2, x ≥ 0} and f(x) = [x₁, x₂]ᵀ. The Pareto front is {x : x₁ + x₂ = 2, x ≥ 0}, and x* = [1/2, 3/2]ᵀ is a Pareto minimizer. But
max{f₁(x*), f₂(x*)} = max{1/2, 3/2} = 3/2.
However, x⁰ = [1, 1]ᵀ is also a feasible point, and max{f₁(x⁰), f₂(x⁰)} = 1 < 3/2. Hence, x* is not a global minimizer of the minimax problem.
For the second part, suppose Ω = {x ∈ ℝ² : x₁ ≤ 2} and f(x) = [x₁, 2]ᵀ. Then, for any x ∈ Ω, max{f₁(x), f₂(x)} = 2. So any x ∈ Ω is a global minimizer of the minimax single-objective problem. However, consider another point x̄ ∈ Ω such that x̄₁ < x₁. Then, f₁(x̄) < f₁(x) and f₂(x̄) = f₂(x). Hence, x is not a Pareto minimizer.
In fact, in the above example, no Pareto minimizer exists. However, if we set Ω = {x ∈ ℝ² : 1 ≤ x₁ ≤ 2}, then the counterexample is still valid, but in this case any point of the form [1, x₂]ᵀ is a Pareto minimizer.
24.3
Let
f̄(x) = cᵀf(x) = Σᵢ₌₁ℓ cᵢfᵢ(x),
where x ∈ Ω. The function f̄ is convex because all the functions fᵢ are convex and cᵢ > 0, i = 1, 2, …, ℓ. We can represent the given first-order condition in the following form: for any feasible direction d at x*,
dᵀ∇f̄(x*) ≥ 0.
By Theorem 22.7, the point x* is a global minimizer of f̄ over Ω. Therefore,
Σᵢ₌₁ℓ cᵢfᵢ(x*) ≤ Σᵢ₌₁ℓ cᵢfᵢ(x) for all x ∈ Ω.
To finish the proof, we now assume that x* is not Pareto optimal while the above condition holds, and proceed by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a point x ∈ Ω such that
fᵢ(x) ≤ fᵢ(x*) for all i = 1, 2, …, ℓ,
and, for some j, fⱼ(x) < fⱼ(x*). Since cᵢ > 0 for all i = 1, 2, …, ℓ, we must have
Σᵢ₌₁ℓ cᵢfᵢ(x) < Σᵢ₌₁ℓ cᵢfᵢ(x*),
which contradicts the condition Σᵢ₌₁ℓ cᵢfᵢ(x*) ≤ Σᵢ₌₁ℓ cᵢfᵢ(x) for all x ∈ Ω. This completes the proof. (See also Exercise 24.2, part a.)
24.4
Let
f̄(x) = cᵀf(x),
where x ∈ Ω = {x : h(x) = 0}. The function f̄ is convex because all the functions fᵢ are convex and cᵢ > 0, i = 1, 2, …, ℓ. We can represent the given Lagrange condition in the form
Df̄(x*) + λᵀDh(x*) = 0ᵀ,
h(x*) = 0.
By Theorem 22.8, the point x* is a global minimizer of f̄ over Ω. Therefore, f̄(x*) ≤ f̄(x) for all x ∈ Ω. That is,
Σᵢ₌₁ℓ cᵢfᵢ(x*) ≤ Σᵢ₌₁ℓ cᵢfᵢ(x) for all x ∈ Ω.
To finish the proof, we now assume that x* is not Pareto optimal while the above condition holds, and proceed by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a point x ∈ Ω such that
fᵢ(x) ≤ fᵢ(x*) for all i = 1, 2, …, ℓ,
and, for some j, fⱼ(x) < fⱼ(x*). Since cᵢ > 0 for all i = 1, 2, …, ℓ, we must have
Σᵢ₌₁ℓ cᵢfᵢ(x) < Σᵢ₌₁ℓ cᵢfᵢ(x*),
which contradicts the above condition. This completes the proof. (See also Exercise 24.2, part a.)
24.5
Let
f̄(x) = cᵀf(x),
where x ∈ Ω = {x : g(x) ≤ 0}. The function f̄ is convex because all the functions fᵢ are convex and cᵢ > 0, i = 1, 2, …, ℓ. We can represent the given KKT condition in the form
μ ≥ 0,
Df̄(x*) + μᵀDg(x*) = 0ᵀ,
μᵀg(x*) = 0,
g(x*) ≤ 0.
By Theorem 22.9, the point x* is a global minimizer of f̄ over Ω. Therefore, f̄(x*) ≤ f̄(x) for all x ∈ Ω. That is,
Σᵢ₌₁ℓ cᵢfᵢ(x*) ≤ Σᵢ₌₁ℓ cᵢfᵢ(x) for all x ∈ Ω.
To finish the proof, we now assume that x* is not Pareto optimal while the above condition holds, and proceed by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a point x ∈ Ω such that
fᵢ(x) ≤ fᵢ(x*) for all i = 1, 2, …, ℓ,
and, for some j, fⱼ(x) < fⱼ(x*). Since cᵢ > 0 for all i = 1, 2, …, ℓ, we must have
Σᵢ₌₁ℓ cᵢfᵢ(x) < Σᵢ₌₁ℓ cᵢfᵢ(x*),
which contradicts the above condition. This completes the proof. (See also Exercise 24.2, part a.)
24.6
Let
f̄(x) = cᵀf(x),
where x ∈ Ω = {x : h(x) = 0, g(x) ≤ 0}. The function f̄ is convex because all the functions fᵢ are convex and cᵢ > 0, i = 1, 2, …, ℓ. We can represent the given KKT-type condition in the form
μ ≥ 0,
Df̄(x*) + λᵀDh(x*) + μᵀDg(x*) = 0ᵀ,
μᵀg(x*) = 0,
h(x*) = 0,
g(x*) ≤ 0.
By Theorem 22.9, the point x* is a global minimizer of f̄ over Ω. Therefore,
f̄(x*) ≤ f̄(x) for all x ∈ Ω.
That is,
Σᵢ₌₁ℓ cᵢfᵢ(x*) ≤ Σᵢ₌₁ℓ cᵢfᵢ(x) for all x ∈ Ω.
To finish the proof, we now assume that x* is not Pareto optimal while the above condition holds, and proceed by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a point x ∈ Ω such that
fᵢ(x) ≤ fᵢ(x*) for all i = 1, 2, …, ℓ,
and, for some j, fⱼ(x) < fⱼ(x*). Since cᵢ > 0 for all i = 1, 2, …, ℓ, we must have
Σᵢ₌₁ℓ cᵢfᵢ(x) < Σᵢ₌₁ℓ cᵢfᵢ(x*),
which contradicts the above condition. This completes the proof. (See also Exercise 24.2, part a.)
24.7
The given minimax problem is equivalent to the problem given in the hint:
minimize z
subject to fᵢ(x) − z ≤ 0, i = 1, 2.
Suppose (x*, z*) is a local minimizer for the above problem (which is equivalent to x* being a local minimizer for the original problem). Then, by the KKT theorem, there exists μ ≥ 0, μ ∈ ℝ², such that
[0ᵀ, 1] + μ₁[∇f₁(x*)ᵀ, −1] + μ₂[∇f₂(x*)ᵀ, −1] = [0ᵀ, 0],
μ₁(f₁(x*) − z*) + μ₂(f₂(x*) − z*) = 0.
Rewriting the first equation above, we get
μ₁∇f₁(x*) + μ₂∇f₂(x*) = 0, μ₁ + μ₂ = 1.
Rewriting the second equation, we get
μᵢ(fᵢ(x*) − z*) = 0, i = 1, 2.
Suppose fᵢ(x*) < max{f₁(x*), f₂(x*)} = z*, where i ∈ {1, 2}. Then, z* > fᵢ(x*). Hence, by the above equation we conclude that μᵢ = 0.