follow-up week 2, day 2

For dimensionality reduction, we talked a bit about principal component analysis and independent component analysis, which both project your feature vectors into a new space. Indeed, those two techniques are all about finding good vectors to project onto. Such vectors are called the basis vectors of your new feature space. Essentially, you take your original representation of your data and find a different representation, one that is (hopefully) better in some meaningful way.
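
If it helps to see the "project onto basis vectors" idea in code, here is a minimal sketch using NumPy and scikit-learn's PCA (both just my choice for illustration; the data is made up, not anything from class):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: 200 samples with 5 correlated features.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))            # 2 underlying factors
mixing = rng.normal(size=(2, 5))              # mix them into 5 features
X = latent @ mixing + 0.1 * rng.normal(size=(200, 5))

# Fit PCA and project onto the top 2 principal components (the new basis).
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)              # shape (200, 2)

# The basis vectors live in pca.components_ (one row per component).
# Projecting by hand gives the same result: center the data, then
# take dot products with each basis vector.
X_centered = X - pca.mean_
X_by_hand = X_centered @ pca.components_.T
print(np.allclose(X_reduced, X_by_hand))      # True
```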

I’ve been looking for some good tutorials on the linear algebra for all this, but haven’t found anything I really like, yet. If you find any you particularly like, please let me know(!) so I can share them with the class.

Khan Academy has a series on linear algebra. It’s lengthy, as it covers a lot of material at a very… thorough pace.

In terms of books, I used Lay’s book in undergrad, so I know it’s adequate. In general, I very much like Strang’s books (and his class is up on OpenCourseWare). Textbooks are pricey, but having a go-to linear algebra reference of some form will be worthwhile if you want to continue studying the field.

The concepts you should have a solid grasp on are basis sets, projections, and eigenvalues/eigenvectors. At an absolute bare minimum, you need to be able to project vectors onto other vectors.
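
As a quick refresher on that bare minimum: the projection of a onto b is (a·b / b·b) b. A tiny NumPy check (just for illustration):

```python
import numpy as np

def project(a, b):
    """Project vector a onto vector b: (a.b / b.b) * b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])
print(project(a, b))   # [3. 0.] -- the component of a along the x-axis
```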

2018-01-20 update: Jeremy Kun’s primer hits these three topics well. The symbol “∈” simply means “is in” or “is an element of”. (Thanks Joel!)