We’ll continue to use the College.train and College.test objects from the first part of this exercise, so you can assume they are already in your R environment when submitting your code.

Questions

  1. Fit a PCR model on the training set, with \(M\) chosen by cross-validation. Store the test error (MSE) for \(M\) = 10 in pcr.error (check the validation plot and think about whether this is a good choice). A code sketch covering this and the PLS fit below appears after the questions.

  2. Fit a PLS model on the training set, with \(M\) chosen by cross-validation. Store the test error (MSE) for \(M\) = 6 in pls.error (check the validation plot and think about whether this is a good choice).

  3. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches? (No submission of code is necessary for this part.)
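
As a reference, here is a minimal sketch of one way to fit both models with the pls package. It assumes the response variable is Apps (the number of applications received, as in the earlier parts of this exercise) and uses the package's default cross-validation settings; your own choice of seed and \(M\) may differ after inspecting the validation plots.

```r
# Sketch only: assumes College.train / College.test are in the environment
# and that Apps is the response variable.
library(pls)

set.seed(1)

## Question 1: PCR with cross-validation
pcr.fit <- pcr(Apps ~ ., data = College.train, scale = TRUE, validation = "CV")
validationplot(pcr.fit, val.type = "MSEP")   # inspect to judge whether M = 10 is reasonable

pcr.pred  <- predict(pcr.fit, College.test, ncomp = 10)
pcr.error <- mean((College.test$Apps - pcr.pred)^2)

## Question 2: PLS with cross-validation
pls.fit <- plsr(Apps ~ ., data = College.train, scale = TRUE, validation = "CV")
validationplot(pls.fit, val.type = "MSEP")   # inspect to judge whether M = 6 is reasonable

pls.pred  <- predict(pls.fit, College.test, ncomp = 6)
pls.error <- mean((College.test$Apps - pls.pred)^2)
```

Setting scale = TRUE standardizes each predictor before extracting components, which matters because the components are sensitive to the predictors' scales.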


Assume that: