 
 
----
 
----
== Solution to Q4 of Week 13 Quiz Pool ==
 
----
 
----
  
a. y[m,n] = h[m,n] ** x[m,n]
  
Using the definition of convolution, <br/>
<math>
\begin{align}
y[m,n] &= \sum_{k=-1}^{1} \sum_{l=-1}^{1} h[k,l] x[m-k,n-l]
\end{align}
</math>
  
Expanding, <br/>
y[m,n] = h[-1,-1] x[m+1,n+1] + h[-1,0] x[m+1,n] + h[-1,1] x[m+1,n-1] + h[0,-1] x[m,n+1] + h[0,0] x[m,n] + h[0,1] x[m,n-1] + h[1,-1] x[m-1,n+1] + h[1,0] x[m-1,n] + h[1,1] x[m-1,n-1]  
  
Substituting the values of h[m,n] from the table, the zero terms drop out:<br/>
  
y[m,n] = h[-1,-1] x[m+1,n+1] + h[-1,1] x[m+1,n-1] + h[0,0] x[m,n] + h[1,-1] x[m-1,n+1] + h[1,1] x[m-1,n-1]

y[m,n] = 0.5 x[m+1,n+1] - 0.5 x[m+1,n-1] + x[m,n] - 0.5 x[m-1,n+1] + 0.5 x[m-1,n-1]
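
This difference equation can be applied directly to a 2-D array. The following is a minimal Python sketch (the function name apply_h and the zero-padding of samples outside the array are illustration choices, not part of the quiz statement):

<pre>
import numpy as np

def apply_h(x):
    """Apply y[m,n] = 0.5 x[m+1,n+1] - 0.5 x[m+1,n-1] + x[m,n]
                     - 0.5 x[m-1,n+1] + 0.5 x[m-1,n-1],
    treating samples outside the array as zero."""
    x = np.asarray(x, dtype=float)
    xp = np.pad(x, 1)                  # zero-pad one sample on every side
    y = np.zeros_like(x)
    M, N = x.shape
    for m in range(M):
        for n in range(N):
            i, j = m + 1, n + 1        # indices into the padded array
            y[m, n] = (0.5 * xp[i + 1, j + 1] - 0.5 * xp[i + 1, j - 1]
                       + xp[i, j]
                       - 0.5 * xp[i - 1, j + 1] + 0.5 * xp[i - 1, j - 1])
    return y
</pre>

Applying such a function to the quiz's x[m,n] should reproduce the table computed in part b below.
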
b. We can rewrite h[m,n] as

{| width="20%" cellspacing="2" cellpadding="2" border="1" class="wikitable" style="text-align: center;"
|+ m
|-
! n !! -1 !! 0 !! 1
|-
! -1
| 0.5 || 0 || -0.5
|-
! 0
| 0 || 1 || 0
|-
! 1
| -0.5 || 0 || 0.5
|}

We compute the output <br/>
y[m,n] = 0.5 x[m+1,n+1] - 0.5 x[m+1,n-1] + x[m,n] - 0.5 x[m-1,n+1] + 0.5 x[m-1,n-1] <br/>
by considering 3x3 portions of x[m,n], where the element at (m,n) corresponds to (0,0) in h[m,n]: we take the neighboring elements (where they exist), multiply them by the corresponding entries of h[m,n], and sum the products to form y[m,n].

Example (indexing from 0): y[3,3] = 0.5 x[4,4] - 0.5 x[4,2] + x[3,3] - 0.5 x[2,4] + 0.5 x[2,2]. Here x[2,2] = 0, x[2,4] = 1, x[3,3] = 1, x[4,2] = 1, and x[4,4] = 1, so y[3,3] = 0.5 - 0.5 + 1 - 0.5 + 0 = 0.5.
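
A quick numerical check of this example (plain Python; the variable names are chosen only for illustration):

<pre>
# x values around position (3,3), taken from the example above
x22, x24, x33, x42, x44 = 0, 1, 1, 1, 1

y33 = 0.5 * x44 - 0.5 * x42 + x33 - 0.5 * x24 + 0.5 * x22
print(y33)   # prints 0.5, matching the hand computation
</pre>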

Calculating the remaining values in the same way gives y[m,n]: <br/>
{| width="50%" cellspacing="3" cellpadding="2" border="1"
 +
|-
 +
| 0
 +
| 0
 +
| 0
 +
| 0
 +
| 0.5
 +
| 0
 +
| -0.5
 +
| 0
 +
| 0
 +
| 0
 +
| 0
 +
|-
 +
| 0
 +
| 0
 +
| 0
 +
| 0.5
 +
| 0.5
 +
| 1
 +
| -0.5
 +
| -0.5
 +
| 0
 +
| 0
 +
| 0
 +
|-
 +
| 0
 +
| 0
 +
| 0.5
 +
| 0.5
 +
| 0.5
 +
| 1
 +
| 1.5
 +
| -0.5
 +
| -0.5
 +
| 0
 +
| 0
 +
|-
 +
| 0
 +
| 0.5
 +
| 0.5
 +
| 0.5
 +
| 0.5
 +
| 1
 +
| 1.5
 +
| 1.5
 +
| -0.5
 +
| -0.5
 +
| 0
 +
|-
 +
| 0.5
 +
| 0.5
 +
| 0.5
 +
| 0.5
 +
| 1
 +
| 1
 +
| 1
 +
| 1.5
 +
| 1.5
 +
| -0.5
 +
| -0.5
 +
|-
 +
| 0.5
 +
| 1
 +
| 0.5
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1.5
 +
| 1
 +
| -0.5
 +
|-
 +
| 0
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 0
 +
|-
 +
| 0
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 0
 +
|-
 +
| 0
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 0
 +
|-
 +
| -0.5
 +
| 0.5
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1
 +
| 1.5
 +
| 0.5
 +
|-
 +
| -0.5
 +
| -0.5
 +
| 0
 +
| 0
 +
| 0
 +
| 0
 +
| 0
 +
| 0
 +
| 0
 +
| 0.5
 +
| 0.5
 +
|}
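
For reference, the same table can be produced with a standard 2-D convolution routine. A minimal sketch, assuming SciPy is available and that the quiz's x[m,n] has been entered as a NumPy array (x itself is not reproduced on this page):

<pre>
import numpy as np
from scipy.signal import convolve2d

# h[m,n] from the table above, with rows and columns ordered m,n = -1, 0, 1
h = np.array([[ 0.5, 0.0, -0.5],
              [ 0.0, 1.0,  0.0],
              [-0.5, 0.0,  0.5]])

def filter_image(x):
    # mode='same' keeps the output the same size as x; boundary='fill' with
    # fillvalue=0 treats neighbors outside the array as zero, matching the
    # hand computation above.
    return convolve2d(x, h, mode='same', boundary='fill', fillvalue=0)
</pre>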

c. From the difference equation

y[m,n] = 0.5 x[m+1,n+1] - 0.5 x[m+1,n-1] + x[m,n] - 0.5 x[m-1,n+1] + 0.5 x[m-1,n-1]

Taking the Fourier transform of both sides, <br/>
<math>
\begin{align}
Y(\mu,\nu) &= \frac{1}{2}X(\mu,\nu)e^{-j\mu}e^{-j\nu} - \frac{1}{2}X(\mu,\nu)e^{-j\mu}e^{j\nu} + X(\mu,\nu) - \frac{1}{2}X(\mu,\nu) e^{j\mu}e^{-j\nu} + \frac{1}{2}X(\mu,\nu)e^{j\mu}e^{j\nu} \\
\frac{Y(\mu,\nu)}{X(\mu,\nu)} &= \frac{1}{2}e^{-j\mu}e^{-j\nu} - \frac{1}{2}e^{-j\mu}e^{j\nu} + 1 - \frac{1}{2} e^{j\mu}e^{-j\nu} + \frac{1}{2}e^{j\mu}e^{j\nu} \\
H(\mu,\nu) &= 1 + \frac{1}{2} \left( e^{-j(\mu + \nu)} + e^{j(\mu + \nu)} \right) - \frac{1}{2} \left( e^{-j(\mu - \nu)} + e^{j(\mu - \nu)} \right) \\
H(\mu,\nu) &= 1 + \cos(\mu + \nu) - \cos(\mu - \nu)
\end{align}
</math>
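
As a sanity check, the closed form can be compared numerically with the DSFT of h[m,n]. A short Python sketch, assuming NumPy is available (the 101-point frequency grid is an arbitrary choice made here):

<pre>
import numpy as np

# Nonzero samples of h[m,n]
h = {(-1, -1): 0.5, (-1, 1): -0.5, (0, 0): 1.0, (1, -1): -0.5, (1, 1): 0.5}

mu = np.linspace(-np.pi, np.pi, 101)
nu = np.linspace(-np.pi, np.pi, 101)
MU, NU = np.meshgrid(mu, nu)

# DSFT of h: sum over (m,n) of h[m,n] * exp(-j*(mu*m + nu*n))
H_dsft = sum(c * np.exp(-1j * (MU * m + NU * n)) for (m, n), c in h.items())

# Closed form derived above
H_closed = 1 + np.cos(MU + NU) - np.cos(MU - NU)

print(np.max(np.abs(H_dsft - H_closed)))   # ~1e-16: the two expressions agree
</pre>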
 
----
 
----
Back to [[ECE438_Week13_Quiz|Lab Week 13 Quiz Pool]]
 
  
Back to [[ECE438_Lab_Fall_2010|ECE 438 Fall 2010 Lab Wiki Page]]

Back to [[2010_Fall_ECE_438_Boutin|ECE 438 Fall 2010]]

[[Category:2010_Fall_ECE_438_Boutin]]
