
Latest revision as of 11:31, 23 February 2012


=Lecture 6 Blog, [[ECE662]] Spring 2012, [[user:mboutin|Prof. Boutin]]=

Thursday January 24, 2012 (Week 3)


Quick link to lecture blogs: [[Lecture1ECE662S12|1]]|[[Lecture2ECE662S12|2]]|[[Lecture3ECE662S12|3]]|[[Lecture4ECE662S12|4]]|[[Lecture5ECE662S12|5]]|[[Lecture6ECE662S12|6]]|[[Lecture7ECE662S12|7]]|[[Lecture8ECE662S12|8]]|[[Lecture9ECE662S12|9]]|[[Lecture10ECE662S12|10]]|[[Lecture11ECE662S12|11]]|[[Lecture12ECE662S12|12]]|[[Lecture13ECE662S12|13]]|[[Lecture14ECE662S12|14]]|[[Lecture15ECE662S12|15]]|[[Lecture16ECE662S12|16]]|[[Lecture17ECE662S12|17]]|[[Lecture18ECE662S12|18]]|[[Lecture19ECE662S12|19]]|[[Lecture20ECE662S12|20]]|[[Lecture21ECE662S12|21]]|[[Lecture22ECE662S12|22]]|[[Lecture23ECE662S12|23]]|[[Lecture24ECE662S12|24]]|[[Lecture25ECE662S12|25]]|[[Lecture26ECE662S12|26]]|[[Lecture27ECE662S12|27]]|[[Lecture28ECE662S12|28]]|[[Lecture29ECE662S12|29]]|[[Lecture30ECE662S12|30]]


Today we began talking about an important subject in decision theory: Bayes rule for normally distributed feature vectors. We proposed a simple discriminant function for this special case and noted its geometric meaning. To better understand this geometric meaning, we first considered the special case where the class density has the identity matrix as covariance matrix. We noticed that in this case the value of the discriminant function is constant along circles centered at the mean of the class density, and that the closer the feature vector is to the mean of the class density (in the usual, Euclidean sense), the larger the value of the discriminant function <math>g_i(x)</math> for that class.
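These two observations are easy to check numerically. The following is a minimal Python/NumPy sketch (not from the lecture; the <math>discriminant</math> helper and the chosen mean and prior are illustrative assumptions): with identity covariance, dropping terms common to all classes gives <math>g_i(x) = -\frac{1}{2}\|x-\mu_i\|^2 + \ln P(\omega_i)</math>, which is constant on circles around <math>\mu_i</math> and increases as <math>x</math> approaches <math>\mu_i</math>.

```python
import numpy as np

def discriminant(x, mu, prior):
    # Gaussian discriminant with identity covariance (Sigma = I):
    # g_i(x) = -0.5 * ||x - mu_i||^2 + ln P(w_i),
    # with constants common to all classes dropped.
    x = np.asarray(x, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return -0.5 * np.sum((x - mu) ** 2) + np.log(prior)

mu = np.array([1.0, 2.0])   # illustrative class mean
prior = 0.5                 # illustrative prior P(w_i)

# g_i is constant along a circle of radius 2 centered at the mean...
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
on_circle = [discriminant(mu + 2.0 * np.array([np.cos(t), np.sin(t)]), mu, prior)
             for t in angles]
print(np.allclose(on_circle, on_circle[0]))  # True

# ...and its value grows as the feature vector approaches the mean.
print(discriminant(mu + [0.5, 0.0], mu, prior) >
      discriminant(mu + [3.0, 0.0], mu, prior))  # True
```

Because the level sets are circles (spheres in higher dimension), the resulting decision boundary between two such classes is the perpendicular bisector of the segment joining the means, shifted according to the priors.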

We also spent a lot of time discussing the first homework. Hopefully you are all beginning to think about possible questions to investigate and how you are going to attack them.

Previous: [[Lecture5ECE662S12|Lecture 5]]

Next: [[Lecture7ECE662S12|Lecture 7]]


Comments

Please write your comments and questions below.

  • Write a comment here
  • Write another comment here.

Back to [[ECE662]] Spring 2012
