 
[[Category:optimization]]
 
=QE2013_AC-3_ECE580-5=
  
 
:[[QE2013_AC-3_ECE580-1|Part 1]],[[QE2013_AC-3_ECE580-2|2]],[[QE2013_AC-3_ECE580-3|3]],[[QE2013_AC-3_ECE580-4|4]],[[QE2013_AC-3_ECE580-5|5]]
 
<br> '''Solution 1: ''' <br>
  
From the constraints <math>x_1 + x_2 = 0</math> and <math>x_2 + x_3 = 0</math>, it can be seen that:

<math>x_1 = x_3 = -x_2 </math>

Substituting into the objective function:

<math>f(x) = x_2 (x_1 + x_3) = -2 x_2^2 </math>

Therefore <math>f</math> has a maximizer but no minimizer, since <math>f(x) \to -\infty</math> as <math>|x_2|</math> increases.

The maximizer is <math>x_1 = x_2 = x_3 = 0</math>, where <math>f</math> attains its maximum value of 0.
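As a quick sanity check (a sketch using sympy; not part of the original solution), the substitution above can be verified symbolically:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1*x2 + x2*x3  # objective function

# Impose the constraint relations x1 = -x2 and x3 = -x2
f_on_constraint = sp.expand(f.subs({x1: -x2, x3: -x2}))
print(f_on_constraint)  # -2*x2**2
```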
<br> '''Solution 2: ''' <br>
<math>f(x) = x_1 x_2 + x_2 x_3 \\
h_1(x) = x_1 + x_2 \\
h_2(x) = x_2 + x_3 \\
l(x,\lambda) = f(x) + \lambda_1 h_1(x) + \lambda_2 h_2(x) = x_1 x_2 + x_2 x_3 + \lambda_1 (x_1 + x_2) + \lambda_2 (x_2 + x_3) \\
\nabla l(x,\lambda) = \begin{bmatrix}
  x_2 + \lambda_1  \\
  x_1 + x_3 + \lambda_1 + \lambda_2 \\
  x_2 + \lambda_2 \\
  x_1 + x_2 \\
  x_2 + x_3
\end{bmatrix} = \begin{bmatrix}
  0 \\
  0 \\
  0 \\
  0 \\
  0
\end{bmatrix} \\
\Rightarrow x^* = \begin{bmatrix}
  0 \\
  0 \\
  0
\end{bmatrix}\ \lambda^* = \begin{bmatrix}
  0 \\
  0
\end{bmatrix} \\
L(x^*,\lambda^*) = F(x^*) + \lambda_1^* H_1(x^*) + \lambda_2^* H_2(x^*) = \begin{bmatrix}
  0 & 1 & 0 \\
  1 & 0 & 1 \\
  0 & 1 & 0
\end{bmatrix} \\
\begin{align}
T(x^*) & = \{y: Dh(x^*)y = 0\} \\
& = \{y: \begin{bmatrix}
  1 & 1 & 0 \\
  0 & 1 & 1
\end{bmatrix} y = 0\} \\
& = \{y: y = \begin{bmatrix}
  1 \\
  -1 \\
  1
\end{bmatrix} a, a \in \Re \}
\end{align} \\
\forall y \in T(x^*), y \ne 0: y^T L(x^*,\lambda^*) y = a \begin{bmatrix} 1 & -1 & 1 \end{bmatrix} \begin{bmatrix}
  0 & 1 & 0 \\
  1 & 0 & 1 \\
  0 & 1 & 0
\end{bmatrix} \begin{bmatrix}
  1 \\
  -1 \\
  1
\end{bmatrix} a = \begin{bmatrix} -1 & 2 & -1 \end{bmatrix} \begin{bmatrix}
  1 \\
  -1 \\
  1
\end{bmatrix} a^2 = -4 a^2 < 0 \\</math>
<math>\therefore x^* = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T</math> is a strict local maximizer, since the quadratic form is negative on <math>T(x^*)</math>.
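The sign of the quadratic form on the tangent space can be confirmed numerically (a sketch using numpy; variable names are illustrative):

```python
import numpy as np

# Hessian of the Lagrangian L(x*, lambda*) from the derivation above
L = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
y = np.array([1, -1, 1])  # basis vector of the tangent space T(x*)

q = y @ L @ y
print(q)  # -4, so y^T L y = -4 a^2 < 0 for any a != 0
```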
<br> '''Comment: ''' Solution 2 follows the formal Lagrange multiplier procedure, which is more involved but applies to more general problems. Solution 1 is less general but simpler for this particular problem. Both yield the same result. <br>
 
[[ QE2013 AC-3 ECE580|Back to QE2013 AC-3 ECE580]]
 

Latest revision as of 11:22, 25 March 2015

