3. Second View of PCA
Next, I will first use LEGO toys to build an analogy, and then use that LEGO example to introduce another interpretation of PCA.
This kind of compact description is the power of feature extraction, and it is especially valuable for computers. For instance, when a computer must recognize the content of a photo, working with extracted features is far more efficient than working with raw pixel values. With that in mind, let’s reexamine PCA from the perspective of LEGO.
As a LEGO toy designer, my task is to find a set of LEGO blocks (the “giant blocks”) and, by “analyzing” an animal’s shape, work out how many of each block is needed, so that the assembled model roughly captures the important features of the animal’s form.
Through this example, the second formulation of PCA becomes easy to state. The PCA problem can be viewed as finding a set of weight vectors \(\textbf{w}_j\) that extract features, and then using those weight vectors together with the extracted features to reconstruct the image, so that the difference between the reconstruction and the original image stays within an acceptable range. That concludes today’s lecture. I hope you now have a general sense of this second interpretation of PCA; in the next lecture, we will state it in more precise language and terminology.
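The reconstruction view above can be sketched in a few lines of NumPy. This is my own illustrative sketch, not code from the lecture: the weight vectors \(\textbf{w}_j\) are taken as the top principal directions, the features are the projections onto them, and the reconstruction error is measured as a mean squared difference. The synthetic data and the choice of one component are assumptions for the demo.

```python
import numpy as np

# Illustrative sketch of PCA's reconstruction view (not from the lecture):
# features z = W^T (x - mean); reconstruction x_hat = W z + mean.
rng = np.random.default_rng(0)
# Synthetic data that mostly varies along a single direction, plus small noise.
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0, 0.5]]) \
    + 0.05 * rng.normal(size=(200, 3))

mean = X.mean(axis=0)
Xc = X - mean
# The weight vectors w_j are the top right singular vectors of the centered data.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:1].T                      # keep k = 1 component: shape (3, 1)

Z = Xc @ W                        # extracted features
X_hat = Z @ W.T + mean            # reconstruction from the features

err = np.mean((X - X_hat) ** 2)
print(err)  # small: one component captures most of the variation
```

Increasing the number of kept components shrinks the reconstruction error, which is exactly the "acceptable range" trade-off described above.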
This is another interpretation of PCA. More importantly, it builds a bridge between the first part of the course and deep learning based on neural network models. Let’s head to the lab for some hands-on practice, and we will continue next week.
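To make the bridge to neural networks concrete, here is a hedged sketch (my own illustration, not the lecture's code) of PCA written in encoder/decoder form, mirroring the structure of a linear autoencoder. The toy data and the function names `encode`/`decode` are assumptions for this demo; the data is constructed to be exactly two-dimensional so that two components reconstruct it perfectly.

```python
import numpy as np

# PCA in autoencoder form: a linear "encoder" extracts features,
# a linear "decoder" reconstructs the input from them.
rng = np.random.default_rng(1)
# Toy data lying exactly in a 2-D subspace of 3-D space.
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 1.0, 0.0],
                                          [0.0, 1.0, 2.0]])

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:2].T  # principal directions, shape (3, 2)

def encode(x):
    """Linear 'encoder': project onto the principal directions."""
    return (x - mean) @ W

def decode(z):
    """Linear 'decoder': map features back to input space."""
    return z @ W.T + mean

X_hat = decode(encode(X))
print(np.allclose(X, X_hat))  # True: the data is exactly 2-dimensional here
```

A neural-network autoencoder replaces these linear maps with learned, possibly nonlinear layers, which is the connection to deep learning hinted at above.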