[[Category:ECE662Spring2012Boutin]]
 
[[Category:blog]]
 
[[Category:maximum likelihood estimation]]
  
 
=Lecture 14 Blog, [[ECE662]] Spring 2012, [[user:mboutin|Prof. Boutin]]=
 
Thursday February 23, 2012 (Week 10)
 
Quick link to lecture blogs: [[Lecture1ECE662S12|1]]|[[Lecture2ECE662S12|2]]|[[Lecture3ECE662S12|3]]|[[Lecture4ECE662S12|4]]|[[Lecture5ECE662S12|5]]|[[Lecture6ECE662S12|6]]|[[Lecture7ECE662S12|7]]|[[Lecture8ECE662S12|8]]|[[Lecture9ECE662S12|9]]|[[Lecture10ECE662S12|10]]|[[Lecture11ECE662S12|11]]|[[Lecture12ECE662S12|12]]|[[Lecture13ECE662S12|13]]|[[Lecture14ECE662S12|14]]|[[Lecture15ECE662S12|15]]|[[Lecture16ECE662S12|16]]|[[Lecture17ECE662S12|17]]|[[Lecture18ECE662S12|18]]|[[Lecture19ECE662S12|19]]|[[Lecture20ECE662S12|20]]|[[Lecture21ECE662S12|21]]|[[Lecture22ECE662S12|22]]|[[Lecture23ECE662S12|23]]|[[Lecture24ECE662S12|24]]|[[Lecture25ECE662S12|25]]|[[Lecture26ECE662S12|26]]|[[Lecture27ECE662S12|27]]|[[Lecture28ECE662S12|28]]|[[Lecture29ECE662S12|29]]|[[Lecture30ECE662S12|30]]
 
----
 
Today we discussed a great property of MLE, along with the danger of misinterpreting this property. More specifically, we discussed the fact that, under Gaussian noise, the variance of the MLE asymptotically achieves the Cramér-Rao bound. We then warned that this does not mean that the MLE is the best estimator, even asymptotically, because its accuracy can be beaten by a biased estimator. A well-known case where this occurs is that of ill-conditioned linear systems of equations, which can be solved more accurately after projecting onto the subspace spanned by the eigenvectors corresponding to the large eigenvalues of the system. We illustrated this using the example of "bundle adjustment" in computer vision. The paper presented in class, in which the ill-conditioning of the problem is resolved by variable elimination, can be found [https://engineering.purdue.edu/~mboutin/papers/curves.pdf here].
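
As a quick illustration of the first point (a standard textbook example, not necessarily the one worked out in class): if <math>x_1,\ldots,x_n</math> are i.i.d. samples from <math>N(\mu,\sigma^2)</math> with <math>\sigma</math> known, the MLE of <math>\mu</math> is the sample mean, and

<math>\hat{\mu}_{MLE} = \frac{1}{n}\sum_{i=1}^n x_i, \qquad \text{Var}\left(\hat{\mu}_{MLE}\right) = \frac{\sigma^2}{n} = \frac{1}{I(\mu)},</math>

where <math>I(\mu) = n/\sigma^2</math> is the Fisher information. The Cramér-Rao bound <math>1/I(\mu)</math> is thus achieved, here even for finite <math>n</math>.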
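To make the ill-conditioning point concrete, here is a minimal numerical sketch (a toy example for illustration only, not code from the lecture; the matrix, noise level, and eigenvalue cutoff are all made up). It compares the plain solve of an ill-conditioned system, which is the MLE under Gaussian noise, against a solve projected onto the large-eigenvalue subspace:

<pre>
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric system A x = b whose eigenvalues span
# eight orders of magnitude, observed with a little Gaussian noise.
n = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthonormal basis
eigvals = np.logspace(0, -8, n)                   # 1 down to 1e-8
A = Q @ np.diag(eigvals) @ Q.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# Plain solve = MLE under Gaussian noise: unbiased, but the small
# eigenvalues amplify the noise by up to a factor of 1e8.
x_mle = np.linalg.solve(A, b)

# Projected solve: invert A only on the subspace spanned by the
# eigenvectors with large eigenvalues (the 1e-4 cutoff is arbitrary).
# This estimator is biased, yet far more accurate here.
w, V = np.linalg.eigh(A)
keep = w > 1e-4
x_proj = V[:, keep] @ ((V[:, keep].T @ b) / w[keep])

print("plain MLE error:", np.linalg.norm(x_mle - x_true))
print("projected error:", np.linalg.norm(x_proj - x_true))
</pre>

In this toy setup the projected solution should come out far closer to x_true: the projection introduces some bias by discarding the small-eigenvalue directions, but it removes the enormous noise amplification those directions cause in the plain MLE solve.
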
This was a somewhat atypical lecture, so your comments are welcome.
  
 
Previous: [[Lecture13ECE662S12|Lecture 13]]
 
Next: [[Lecture15ECE662S12|Lecture 15]]


==Comments==

Please write your comments and questions below.

* Write a comment here.
* Write another comment here.

Back to ECE662 Spring 2012
