ECE Ph.D. Qualifying Exam

Automatic Control (AC)

Question 3: Optimization

August 2017




1. (20 pts) Consider the following linear program:

minimize $ 2x_{1} + x_{2} $,

subject to $ x_{1} + 3x_{2} \geq 6 $

$ 2x_{1} + x_{2} \geq 4 $

$ x_{1} + x_{2} \leq 3 $

$ x_{1} \geq 0 $, $ x_{2} \geq 0 $.

Convert the above linear program into standard form and find an initial basic feasible solution for the program in standard form.

Solution of question 1:
The problem is equivalent to:
Minimize $ 2x_1+x_2 $
Subject to $ \begin{align*} &x_1+3x_2-x_3=6\\ &2x_1+x_2-x_4=4\\ &x_1+x_2+x_5=3\\ &x_1, x_2, x_3, x_4, x_5 \geq 0, \end{align*} $
where $ x_3, x_4 $ are surplus variables and $ x_5 $ is a slack variable, so that $ A= \begin{bmatrix} 1 & 3 & -1 & 0 & 0 \\ 2 & 1 & 0 & -1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{bmatrix} $, $ b=\begin{bmatrix} 6\\ 4\\ 3 \end{bmatrix} $.
Take the basis formed by the columns of $ x_1 $, $ x_2 $, $ x_5 $: $ B= \begin{bmatrix} 1 & 3 & 0 \\ 2 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \Rightarrow B\begin{bmatrix} x_1 \\ x_2 \\ x_5 \end{bmatrix} =b \Rightarrow \begin{bmatrix} x_1 \\ x_2 \\ x_5 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 0 \\ 2 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 6\\ 4\\ 3 \end{bmatrix} = \begin{bmatrix} \dfrac{6}{5} \\ \dfrac{8}{5} \\ \dfrac{1}{5} \end{bmatrix} $
Hence $ x^T=[\dfrac{6}{5}, \dfrac{8}{5}, 0, 0, \dfrac{1}{5}] $ is a basic feasible solution.
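
A quick way to double-check this basic feasible solution is to solve $ Bx_B=b $ numerically and verify $ Ax=b $, $ x\geq 0 $. The sketch below is only a verification aid and assumes NumPy is available.

```python
# Minimal numerical check of the basic feasible solution (assumes NumPy).
import numpy as np

A = np.array([[1, 3, -1,  0, 0],
              [2, 1,  0, -1, 0],
              [1, 1,  0,  0, 1]], dtype=float)
b = np.array([6, 4, 3], dtype=float)

basis = [0, 1, 4]                 # columns of x1, x2, x5
B = A[:, basis]
x_B = np.linalg.solve(B, b)       # [6/5, 8/5, 1/5]

x = np.zeros(5)
x[basis] = x_B
print(x)                                        # [1.2 1.6 0.  0.  0.2]
print(np.allclose(A @ x, b), np.all(x >= 0))    # True True -> basic feasible
```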



2.(20 pts)

  • (15 pts) Find the largest range of the step size, $ \alpha $, for which the fixed step gradient descent algorithm is guaranteed to converge to the minimizer of the quadratic function
$ f = \frac{1}{2} x^{T}Qx - b^{T}x $

starting from an arbitrary initial condition $ x^{(0)} \in \mathbb{R}^{n} $, where $ x \in \mathbb{R}^{n} $, and $ Q = Q^{T} > 0 $.

  • (5 pts) Find the largest range of the step size, $ \alpha $, for which the fixed step gradient descent algorithm is guaranteed to converge to the minimizer of the quadratic function
$ f= 6x_{1}^{2}+2x_{2}^{2}-5 $

starting from an arbitrary initial condition $ x^{(0)} \in \mathbb{R}^{2} $.

Solution of question 2:
a) By Lemma 8.3 of the optimization textbook by Chong and Żak (An Introduction to Optimization), the fixed step gradient descent algorithm $ x^{(k+1)}=x^{(k)}-\alpha(Qx^{(k)}-b) $ converges to the minimizer from every initial condition if and only if $ \alpha $ lies in the range $ \left(0,\dfrac{2}{\lambda_{\max}(Q)}\right) $: the error $ e^{(k)}=x^{(k)}-Q^{-1}b $ satisfies $ e^{(k+1)}=(I-\alpha Q)e^{(k)} $, which converges to zero for all $ e^{(0)} $ iff $ |1-\alpha\lambda_i|<1 $ for every eigenvalue $ \lambda_i $ of $ Q $.
b) Here $ Q=\begin{bmatrix} 12 & 0 \\ 0 & 4 \end{bmatrix} $,
so that $ \lambda_{\max}(Q)=12 \Rightarrow \alpha \in \left(0, \dfrac{1}{6}\right) $.
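
The step-size bound in part (b) can be illustrated numerically; the sketch below (assuming NumPy) runs fixed step gradient descent on $ f=6x_1^2+2x_2^2-5 $, whose Hessian is $ Q=\mathrm{diag}(12,4) $, for one step size inside the range and one outside it.

```python
# Fixed step gradient descent on f(x) = 6*x1^2 + 2*x2^2 - 5 (assumes NumPy).
# Convergence threshold is 2/lambda_max(Q) = 2/12 = 1/6.
import numpy as np

Q = np.diag([12.0, 4.0])

def run(alpha, steps=200):
    x = np.array([1.0, 1.0])        # arbitrary initial condition
    for _ in range(steps):
        x = x - alpha * (Q @ x)     # gradient of f is Qx (here b = 0)
    return np.linalg.norm(x)

print(run(0.15))   # alpha < 1/6: iterates shrink toward the minimizer x* = 0
print(run(0.20))   # alpha > 1/6: iterates diverge
```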



3. (20 pts) Is the function

$ f(x_{1}, x_{2})=\frac{1}{(x_{1}-2)^{2} + (x_{2}+1)^{2}+3} $

locally convex, concave, or neither in the neighborhood of the point $ [2 -1]^{T} $? Justify your answer by giving all the details of your argument.

Solution of question 3:
Let $ t_1=x_1-2 $, $ t_2=x_2+1 $,
so that $ g(t_1,t_2)=\dfrac{1}{t_1^2+t_2^2+3} $ near $ t_1=0, t_2=0 $ has the same convexity properties as $ f(x_1,x_2)=\dfrac{1}{(x_1-2)^2+(x_2+1)^2+3} $ near $ x_1=2, x_2=-1 $.
The Hessian is
$ D^2g(t)=\dfrac{1}{(t_1^2+t_2^2+3)^3}\begin{bmatrix} 6t_1^2-2t_2^2-6 & 8t_1t_2 \\ 8t_1t_2 & 6t_2^2-2t_1^2-6 \end{bmatrix}, \qquad D^2g(0)=\dfrac{1}{27}\begin{bmatrix} -6 & 0 \\ 0 & -6 \end{bmatrix}. $
Since $ \begin{bmatrix} -6 & 0 \\ 0 & -6 \end{bmatrix} $ is negative definite and the Hessian is continuous, $ D^2f $ remains negative definite in a neighborhood of $ [2, -1]^T $.
Hence $ f $ is strictly locally concave in a neighborhood of $ [2, -1]^T $.
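
The Hessian evaluation can be checked symbolically; the sketch below assumes SymPy is available.

```python
# Symbolic check of the Hessian of f at [2, -1]^T (assumes SymPy).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = 1 / ((x1 - 2)**2 + (x2 + 1)**2 + 3)

H = sp.hessian(f, (x1, x2)).subs({x1: 2, x2: -1})
print(H)                          # Matrix([[-2/9, 0], [0, -2/9]])
print(H.is_negative_definite)     # True -> strictly locally concave at [2, -1]^T
```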


4. (20 pts) Solve the following optimization problem:

optimize $ x_{1}x_{2} $

subject to $ x_{1}+x_{2}+x_{3}=1 $

$ x_{1}+x_{2}-x_{3}=0 $

Solution of question 4:
We form the Lagrangian:
$ l(x,\lambda)=x_1x_2+\lambda_1(x_1+x_2+x_3-1)+\lambda_2(x_1+x_2-x_3) $
$ \begin{cases} \nabla_xl=\begin{bmatrix} x_2+\lambda_1+\lambda_2 \\ x_1+\lambda_1+\lambda_2 \\ \lambda_1-\lambda_2\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \\ x_1+x_2+x_3-1=0 \\ x_1+x_2-x_3=0 \end{cases} $
The third component gives $ \lambda_1=\lambda_2 $, and the first two give $ x_1=x_2=-(\lambda_1+\lambda_2) $. Adding and subtracting the two constraints gives $ x_1+x_2=\dfrac{1}{2} $ and $ x_3=\dfrac{1}{2} $, so $ x_1=x_2=\dfrac{1}{4} $, $ x_3=\dfrac{1}{2} $, $ \lambda_1=\lambda_2=-\dfrac{1}{8} $.
On the tangent space, spanned by $ [1, -1, 0]^T $, the Hessian of the Lagrangian $ \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} $ is negative definite ($ y^T\nabla_x^2l\,y=-2t^2<0 $ for $ y=t[1,-1,0]^T $, $ t\neq 0 $).
Hence $ x^*=[\dfrac{1}{4}, \dfrac{1}{4}, \dfrac{1}{2}]^T $ is a strict local maximizer of $ x_1x_2 $ on the constraint set, with value $ \dfrac{1}{16} $.
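
The first-order Lagrange conditions above can be solved symbolically as a cross-check; the sketch below assumes SymPy is available.

```python
# Solve the first-order Lagrange conditions for question 4 (assumes SymPy).
# Expected solution: x = (1/4, 1/4, 1/2), lambda1 = lambda2 = -1/8.
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 l1 l2', real=True)
L = x1*x2 + l1*(x1 + x2 + x3 - 1) + l2*(x1 + x2 - x3)

eqs = [sp.diff(L, v) for v in (x1, x2, x3)] + [x1 + x2 + x3 - 1, x1 + x2 - x3]
sol = sp.solve(eqs, (x1, x2, x3, l1, l2), dict=True)
print(sol)   # [{x1: 1/4, x2: 1/4, x3: 1/2, l1: -1/8, l2: -1/8}]
```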


5. (20 pts) Solve the following optimization problem:

maximize $ 14x_{1}-x_{1}^{2}+6x_{2}-x_{2}^{2}+7 $

subject to $ x_{1}+x_{2} \leq 2 $

$ x_{1}+2x_{2} \leq 3 $

Solution of question 5:
The problem is equivalent to
Minimize $ x_1^2+x_2^2-14x_1-6x_2-7 $
Subject to $ x_1+x_2-2\leq 0 $ and $ x_1+2x_2-3\leq 0 $.
Form the Lagrangian function
$ l(x,\mu)=(x_1)^2+(x_2)^2-14x_1-6x_2-7+\mu_1(x_1+x_2-2)+\mu_2(x_1+2x_2-3) $
The KKT condition takes the form
$ \nabla_xl(x,\mu)=\begin{bmatrix}2x_1-14+\mu_1+\mu_2 \\ 2x_2-6+\mu_1+2\mu_2\end{bmatrix}=\begin{bmatrix}0 \\ 0\end{bmatrix} $
$ \mu_1(x_1+x_2-2)=0 $
$ \mu_2(x_1+2x_2-3)=0 $
$ \mu_1\geq 0 $, $ \mu_2\geq 0 $
$ \Rightarrow \begin{cases} \mu_1=0, & \mu_2=0: & x_1=7, & x_2=3 & \text{infeasible } (x_1+x_2>2) \\ \mu_1=0, & \mu_2=4: & x_1=5, & x_2=-1 & \text{infeasible } (x_1+x_2>2) \\ \mu_1=8, & \mu_2=0: & x_1=3, & x_2=-1 & \text{feasible, } f(x)=-33 \\ \mu_1=20, & \mu_2=-8: & x_1=1, & x_2=1 & \text{invalid } (\mu_2<0) \end{cases} $
Since the minimization objective is strictly convex and the constraints are linear, the KKT conditions are sufficient. Hence $ x^T=[3, -1] $ is the maximizer of the original function, with maximum value $ 33 $.
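
The KKT solution can be cross-checked numerically by solving the equivalent minimization problem with an off-the-shelf solver; the sketch below assumes SciPy is available and uses its SLSQP method.

```python
# Numerical cross-check of question 5 (assumes NumPy and SciPy).
# Minimizes the equivalent convex problem; expect x ~ (3, -1), objective -33,
# i.e. a maximum of 33 for the original function.
import numpy as np
from scipy.optimize import minimize

obj = lambda x: x[0]**2 + x[1]**2 - 14*x[0] - 6*x[1] - 7
cons = [{'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[1]},     # x1 + x2  <= 2
        {'type': 'ineq', 'fun': lambda x: 3 - x[0] - 2*x[1]}]   # x1 + 2x2 <= 3

res = minimize(obj, x0=np.zeros(2), method='SLSQP', constraints=cons)
print(res.x, res.fun)   # approximately [3. -1.] -33.0
```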




