Lecture 14 Blog, ECE662 Spring 2012, Prof. Boutin

Thursday February 23, 2012 (Week 7)




Today we discussed a great property of MLE, along with the danger of misinterpreting this property. More specifically, we discussed the fact that, under Gaussian noise, the variance of the MLE asymptotically achieves the Cramér-Rao bound. We then warned that this does not mean that the MLE is the best estimator, even asymptotically, because its accuracy can still be beaten by a biased estimator. A well-known case where this occurs is ill-conditioned linear systems of equations, which can be solved more accurately after projecting onto the subspace spanned by the eigenvectors corresponding to the large eigenvalues of the system. We illustrated this using the example of "bundle adjustment" in computer vision. The paper presented in class, in which the ill-conditioning of the problem is resolved by variable elimination, can be found in [1].
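
To make the warning above concrete, here is a minimal numerical sketch (not from the lecture; the matrix size, noise level, and cutoff k below are illustrative assumptions). It compares the least-squares solution of an ill-conditioned linear system, which is the MLE under Gaussian noise, against a biased estimator that keeps only the dominant singular directions, in the spirit of the subspace projection discussed in class.

# Sketch: on an ill-conditioned system, a biased truncated estimator
# can beat the MLE (least squares under Gaussian noise) in accuracy.
# All sizes and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Build an ill-conditioned matrix with geometrically decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)          # condition number ~ 1e9
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # Gaussian noise

# MLE under Gaussian noise = ordinary least squares (full-rank solve here).
x_mle = np.linalg.lstsq(A, b, rcond=None)[0]

# Biased estimator: invert only on the k dominant singular directions,
# discarding the directions where noise is amplified by 1/s.
k = 5
x_trunc = V[:, :k] @ np.diag(1.0 / s[:k]) @ U[:, :k].T @ b

print("MLE error:      ", np.linalg.norm(x_mle - x_true))
print("Truncated error:", np.linalg.norm(x_trunc - x_true))

With these settings the least-squares error is dominated by noise amplified by the small singular values (on the order of 1e-6/1e-9), while the truncated estimator pays only a small bias for the discarded directions, so its error is orders of magnitude lower. This does not contradict the Cramér-Rao bound, which only constrains unbiased estimators.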

This was a somewhat atypical lecture, so your comments are welcome.

Relevant Rhea Pages

  • A student page about parametric density estimation, from ECE662 Spring 2008

Previous: Lecture 13

Next: Lecture 15


Comments

Please write your comments and questions below.

  • Write a comment here
  • Write another comment here.

Back to ECE662 Spring 2012
