QE2013_AC-3_ECE580-3

Part 1,2,3,4,5

(i)
Solution 1:
To remove the absolute values, each variable is split into a positive component and a negative component. For example, for $ x_1 $:

$ x_1 = x_1^+ - x_1^-, \quad x_1^+, x_1^- \ge 0 $, with at least one of $ x_1^+,\ x_1^- $ equal to $ 0 $.

Then $ |x_1| = x_1^+ + x_1^- $
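
For instance, $ x_1 = -2 $ corresponds to $ x_1^+ = 0,\ x_1^- = 2 $, giving $ |x_1| = 0 + 2 = 2 $.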

Therefore the given problem can be converted into a linear programming problem:

$ \text{maximize}\ -x_1^+ -x_1^- -x_2^+ -x_2^- -x_3^+ -x_3^- \\ \text{subject to} \\ \begin{bmatrix} 1 & -1 & 1 & -1 & -1 & 1 \\ 0 & 0 & -1& 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1^+ \\ x_1^- \\ x_2^+ \\ x_2^- \\ x_3^+ \\ x_3^- \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \\ x_1^+ ,x_1^- ,x_2^+ ,x_2^- ,x_3^+ ,x_3^- \ge 0 $
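
As a quick numerical check (not part of the original solution, and assuming NumPy and SciPy are available), the equivalent minimization of $ x_1^+ +x_1^- +x_2^+ +x_2^- +x_3^+ +x_3^- $ can be solved with scipy.optimize.linprog; the columns follow the same variable order as the matrix above.

import numpy as np
from scipy.optimize import linprog

c = np.ones(6)                                  # costs of x1+, x1-, x2+, x2-, x3+, x3-
A_eq = np.array([[1, -1,  1, -1, -1,  1],
                 [0,  0, -1,  1,  0,  0]])
b_eq = np.array([2, 1])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.fun)   # expected 4.0, so the maximum of the negated objective above is -4
print(res.x)     # one optimal vertex; alternative optima exist (see Case 3 below)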

This problem has the same solution as the original problem provided that, for $ i = 1, 2, 3 $, at least one of $ x_i^+ $ and $ x_i^- $ is $ 0 $ (this always holds at an optimum: if both were positive, both could be decreased by the same amount without changing $ x_i $, which would improve the objective).

From the second constraint, $ -x_2^+ +x_2^- = 1 $; since $ x_2^+, x_2^- \ge 0 $ and at least one of them is $ 0 $, this forces $ x_2^+ = 0,\ x_2^- = 1 $.

Substituting $ x_2^+ - x_2^- = -1 $ into the first constraint, the remaining equality constraint simplifies to

$ x_1^+ -x_1^- -x_3^+ +x_3^- = 3 $
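
Equivalently, in the original variables the problem is now to minimize $ |x_1| + |x_3| $ subject to $ x_1 - x_3 = 3 $, with $ x_2 = -1 $ contributing $ 1 $ to the objective.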

There are 4 cases to consider, depending on which of $ x_1^+, x_1^- $ and which of $ x_3^+, x_3^- $ is set to $ 0 $:

Case 1: $ x_1^+ = x_3^+ = 0 $

This is equivalent to finding $ x_1^-, x_3^- \ge 0 $ to maximize $ -x_1^- -x_3^- $ subject to $ -x_1^- +x_3^- = 3 $.

The optimal solution is $ x_1^- = 0,\ x_3^- = 3 $ (since $ x_3^- = 3 + x_1^- $, the objective is $ -(2x_1^- + 3) $, maximized at $ x_1^- = 0 $); in this case $ -x_1^+ -x_1^- -x_2^+ -x_2^- -x_3^+ -x_3^- = -4 $.

The other 3 cases can be solved similarly:

Case 2: $ x_1^+ = x_3^- = 0 $

There is no solution, as $ -x_1^- -x_3^+ = 3 $ cannot be satisfied (the left-hand side is non-positive).

Case 3: $ x_1^- = x_3^+ = 0 $

Every feasible solution is optimal in this case (i.e., any $ x_1^+, x_3^- \ge 0 $ satisfying $ x_1^+ +x_3^- = 3 $), and each gives $ -x_1^+ -x_1^- -x_2^+ -x_2^- -x_3^+ -x_3^- = -4 $.

Case 4: $ x_1^- = x_3^- = 0 $

The optimal solution is $ x_1^+ = 3,\ x_3^+ = 0 $; in this case $ -x_1^+ -x_1^- -x_2^+ -x_2^- -x_3^+ -x_3^- = -4 $.


Therefore, the optimal solutions are $ x_1^- = x_3^+ = 0,\ x_2^+ = 0,\ x_2^- = 1 $, together with any $ x_1^+, x_3^- \ge 0 $ satisfying $ x_1^+ + x_3^- = 3 $. In terms of the original variables: $ x_2 = -1 $ and any $ x_1, x_3 $ satisfying $ x_1 \ge 0,\ x_3 \le 0,\ |x_1| + |x_3| = 3 $.

In every such case the objective value is $ -x_1^+ -x_1^- -x_2^+ -x_2^- -x_3^+ -x_3^- = -4 $; equivalently, the minimum of $ |x_1| + |x_2| + |x_3| $ is $ 4 $.
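
As a sanity check in the original variables: for instance $ x_1 = 3,\ x_2 = -1,\ x_3 = 0 $ satisfies $ x_1 + x_2 - x_3 = 2 $ and $ -x_2 = 1 $ (the two rows of the constraint matrix) and gives $ |x_1| + |x_2| + |x_3| = 4 $.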


Solution 2:

$ x_i = x_i^+ - x_i^-, \ \ \ \ x_i^+ ,x_i^- \ge 0 $

$ |x_i| = x_i^+ + x_i^- $

Then the problem is converted into a linear programming problem:

$ \text{minimize}\ x_1^+ +x_1^- +x_2^+ +x_2^- +x_3^+ +x_3^- \\ \text{subject to} \\ \begin{bmatrix} 1 & -1 & 1 & -1 & -1 & 1 \\ 0 & 0 & -1& 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1^+ \\ x_1^- \\ x_2^+ \\ x_2^- \\ x_3^+ \\ x_3^- \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \\ x_1^+ ,x_1^- ,x_2^+ ,x_2^- ,x_3^+ ,x_3^- \ge 0 $

The initial tableau, with the cost coefficients in the last row and $ b $ in the last column, is:

$ \begin{array}{|c|c|c|c|c|c|c|} x_1^+ & x_1^- & x_2^+ & x_2^- & x_3^+ & x_3^- & b\\ \hline 1 &-1 & 1 &-1 &-1 & 1 & 2\\ 0 & 0 &-1 & 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 0\\ \end{array} $

Introduce an artificial variable $ x_4 $ into the second constraint (the $ x_1^+ $ column already supplies a unit column for the first row) and apply the two-phase simplex method.

Phase 1 (minimize the artificial variable $ x_4 $): the last row below is the Phase-1 cost row. It is first reduced by subtracting the second row, so that the basic variable $ x_4 $ is priced out; a pivot on the $ x_2^- $ column of the second row then drives $ x_4 $ out of the basis.

$ \begin{array}{|c|c|c|c|c|c|c|c|} x_1^+ & x_1^- & x_2^+ & x_2^- & x_3^+ & x_3^- & x_4 & b\\ \hline 1 &-1 & 1 &-1 &-1 & 1 & 0 & 2\\ 0 & 0 &-1 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} $

$ \begin{array}{|c|c|c|c|c|c|c|c|} x_1^+ & x_1^- & x_2^+ & x_2^- & x_3^+ & x_3^- & x_4 & b\\ \hline 1 &-1 & 1 &-1 &-1 & 1 & 0 & 2\\ 0 & 0 &-1 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 &-1 & 0 & 0 & 0 &-1\\ \end{array} $

$ \begin{array}{|c|c|c|c|c|c|c|c|} x_1^+ & x_1^- & x_2^+ & x_2^- & x_3^+ & x_3^- & x_4 & b\\ \hline 1 &-1 & 0 & 0 &-1 & 1 & 1 & 3\\ 0 & 0 &-1 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} $
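
For reference, the row operations between these tableaus are ordinary Gauss-Jordan pivots. A minimal Python sketch (with a hypothetical pivot helper; not part of the original solution) that reproduces the last Phase-1 step:

import numpy as np

def pivot(T, r, c):
    """Return a copy of tableau T after a Gauss-Jordan pivot on row r, column c."""
    T = T.astype(float).copy()
    T[r] /= T[r, c]                   # scale the pivot row so the pivot entry becomes 1
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]    # eliminate column c from every other row
    return T

# Second Phase-1 tableau above: columns x1+, x1-, x2+, x2-, x3+, x3-, x4, b;
# the last row is the Phase-1 cost row.
T = np.array([[1, -1,  1, -1, -1, 1, 0,  2],
              [0,  0, -1,  1,  0, 0, 1,  1],
              [0,  0,  1, -1,  0, 0, 0, -1]])

print(pivot(T, r=1, c=3))   # pivot on the x2- column of the second row;
                            # reproduces the third Phase-1 tableau above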

Phase 2 (minimize the original cost): drop the artificial column, restore the original cost row, and price out the basic columns $ x_1^+ $ and $ x_2^- $:

$ \begin{array}{|c|c|c|c|c|c|c|} x_1^+ & x_1^- & x_2^+ & x_2^- & x_3^+ & x_3^- & b\\ \hline 1 &-1 & 0 & 0 &-1 & 1 & 3\\ 0 & 0 &-1 & 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 0\\ \end{array} $


$ \begin{array}{|c|c|c|c|c|c|c|} x_1^+ & x_1^- & x_2^+ & x_2^- & x_3^+ & x_3^- & b\\ \hline 1 &-1 & 0 & 0 &-1 & 1 & 3\\ 0 & 0 &-1 & 1 & 0 & 0 & 1\\ 0 & 2 & 2 & 0 & 2 & 0 &-4\\ \end{array} $

All reduced costs are now non-negative, so the tableau is optimal. The basic variables are $ x_1^+ = 3 $ and $ x_2^- = 1 $, giving the optimal solution $ x^* = [3, -1, 0] $ with cost $ 4 $. (The zero reduced cost in the $ x_3^- $ column indicates the alternative optima found in Solution 1.)

(ii)
Solution 1:

Maximizing $ -x_1^+ -x_1^- -x_2^+ -x_2^- -x_3^+ -x_3^- $ is the same as minimizing $ x_1^+ +x_1^- +x_2^+ +x_2^- +x_3^+ +x_3^- $, so the primal is in the standard form $ \min\ c^T x $ subject to $ Ax = b,\ x \ge 0 $, with $ c $ the all-ones vector.

Therefore, using the asymmetric form of duality, the dual problem is:

$ \text{maximize}\ \lambda^T \begin{bmatrix} 2 \\ 1 \end{bmatrix} \\ \text{subject to} \\ \lambda^T \begin{bmatrix} 1 & -1 & 1 & -1 & -1 & 1 \\ 0 & 0 & -1& 1 & 0 & 0 \end{bmatrix} \le \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} $

Writing out the six scalar constraints ($ \lambda_1 \le 1 $, $ -\lambda_1 \le 1 $, $ \lambda_1 - \lambda_2 \le 1 $, $ -\lambda_1 + \lambda_2 \le 1 $, $ -\lambda_1 \le 1 $, $ \lambda_1 \le 1 $) and combining duplicates, this becomes:

$ \text{maximize}\ 2 \lambda_1 + \lambda_2 \\ \text{subject to} \\ |\lambda_1| \le 1 \\ |\lambda_1 - \lambda_2| \le 1 $

Since $ \lambda_1 \le 1 $ and $ \lambda_2 \le \lambda_1 + 1 \le 2 $, the optimal solution is $ \lambda_1 = 1,\ \lambda_2 = 2 $. In this case $ 2 \lambda_1 + \lambda_2 = 4 $.
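
This can also be confirmed numerically (a sketch assuming SciPy; linprog minimizes, so the objective is negated and $ \lambda_1, \lambda_2 $ are left unbounded):

import numpy as np
from scipy.optimize import linprog

c = np.array([-2.0, -1.0])          # minimize -(2*l1 + l2)
A_ub = np.array([[ 1,  0],          #  l1       <= 1
                 [-1,  0],          # -l1       <= 1
                 [ 1, -1],          #  l1 - l2  <= 1
                 [-1,  1]])         # -l1 + l2  <= 1
b_ub = np.ones(4)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(-res.fun, res.x)              # expected: 4.0 and [1. 2.]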


Solution 2:

$ \text{maximize}\ 2 \lambda_1 + \lambda_2 \ \Longrightarrow\ \text{minimize}\ -2 \lambda_1 - \lambda_2 \\ \text{subject to} \\ \lambda^T \begin{bmatrix} 1 & -1 & 1 & -1 & -1 & 1 \\ 0 & 0 & -1& 1 & 0 & 0 \end{bmatrix} \le \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} $

Introduce slack variables $ \lambda_3, \dots, \lambda_8 $ to convert the inequality constraints to equalities; the simplex iterations below then pivot first on the $ \lambda_1 $ column (first row) and then on the $ \lambda_2 $ column (fourth row):

$ \lambda_1 + \lambda_3 = 1 \\ -\lambda_1 + \lambda_4 = 1 \\ \lambda_1 - \lambda_2 + \lambda_5 = 1 \\ -\lambda_1 + \lambda_2 + \lambda_6 = 1 \\ -\lambda_1 + \lambda_7 = 1 \\ \lambda_1 + \lambda_8 = 1 \\ $


$ \begin{array}{|c|c|c|c|c|c|c|c|c|} \lambda_1 & \lambda_2 & \lambda_3 & \lambda_4 & \lambda_5 & \lambda_6 & \lambda_7 & \lambda_8 & c\\ \hline 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\ -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1\\ 1 &-1 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\ -1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1\\ -1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ -2 &-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \end{array} $

$ \begin{array}{|c|c|c|c|c|c|c|c|c|} \lambda_1 & \lambda_2 & \lambda_3 & \lambda_4 & \lambda_5 & \lambda_6 & \lambda_7 & \lambda_8 & c\\ \hline 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 2\\ 0 &-1 &-1 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 2\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 2\\ 0 & 0 &-1 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 &-1 & 2 & 0 & 0 & 0 & 0 & 0 & 2\\ \end{array} $

$ \begin{array}{|c|c|c|c|c|c|c|c|c|} \lambda_1 & \lambda_2 & \lambda_3 & \lambda_4 & \lambda_5 & \lambda_6 & \lambda_7 & \lambda_8 & c\\ \hline 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 2\\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 2\\ 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 2\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 2\\ 0 & 0 &-1 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 3 & 0 & 0 & 1 & 0 & 0 & 4\\ \end{array} $

All reduced costs are now non-negative, so the solution is optimal: $ \lambda^* = \begin{bmatrix} 1 \\ 2 \end{bmatrix} $, with optimal dual value $ 2 \lambda_1 + \lambda_2 = 4 $.
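
As a check on strong duality, $ 2 \lambda_1^* + \lambda_2^* = 2(1) + 2 = 4 $ equals the optimal primal cost found in part (i).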


Comment: Solution 2 uses the formal procedure of the simplex method, which is more involved but applies to more general problems. Solution 1 is less general but simpler for this particular problem. Both give the same result.
Back to QE2013 AC-3 ECE580
