SDSC5001 - Assignment 2
#assignment #sdsc5001
Question link: SDSC5001 - Questions of Assignment 2
Question 1
When the number of features p is large, there tends to be a deterioration in the performance of KNN and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when p is large. We will now investigate this curse.
(a) Suppose that we have a set of observations, each with measurements on p=1 feature, X. We assume that X is uniformly (evenly) distributed on [0, 1]. Associated with each observation is a response value. Suppose that we wish to predict a test observation’s response using only observations that are within 10% of the range of X closest to that test observation. For instance, in order to predict the response for a test observation with X=0.6, we will use observations in the range [0.55, 0.65]. On average, what fraction of the available observations will we use to make the prediction?
(b) Now suppose that we have a set of observations, each with measurements on p=2 features, X1 and X2. We assume that (X1, X2) are uniformly distributed on [0,1] x [0,1]. We wish to predict a test observation’s response using only observations that are within 10% of the range of X1 and within 10% of the range of X2 closest to that test observation. For instance, in order to predict the response for a test observation with X1=0.6 and X2=0.35, we will use observations in the range [0.55, 0.65] for X1 and in the range [0.3, 0.4] for X2. On average, what fraction of the available observations will we use to make the prediction?
(c) Now suppose that we have a set of observations on p=100 features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the 10% of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
(d) Using your answers to parts (a)-(c), argue that a drawback of KNN when p is large is that there are very few training observations “near” any given test observation.
Solution to Question 1
(a) For p=1, the feature X is uniformly distributed on [0,1]. The range of X is 1, and we are considering a neighborhood that is 10% of this range, so the interval length is 0.1. Since the distribution is uniform, the fraction of observations used is exactly the proportion of the interval length to the total range, which is 0.1 or 10%.
(b) For p=2, the features (X1, X2) are uniformly distributed on the unit square [0,1]x[0,1]. The neighborhood is a rectangle with sides of length 0.1 in each dimension (10% of the range for each feature). The area of this rectangle is 0.1 * 0.1 = 0.01. Since the distribution is uniform, the fraction of observations used is the area of the rectangle, which is 0.01 or 1%.
(c) For p=100, the features are uniformly distributed on a 100-dimensional hypercube. The neighborhood is a hypercube with each side of length 0.1. The volume of this hypercube is $0.1^{100} = 10^{-100}$. Thus, the fraction of observations used is extremely small, approximately $10^{-100}$.
(d) From parts (a) to (c), we see that as the number of features p increases, the fraction of observations near a test observation decreases dramatically. For p=1, it’s 10%; for p=2, it’s 1%; and for p=100, it’s virtually zero. This means that in high dimensions, there are very few training points within the local neighborhood of any test point. Consequently, KNN and other local methods suffer because they rely on having sufficient nearby points to make accurate predictions. This is the curse of dimensionality: as p grows, the data becomes sparse, and local approximations become unreliable.
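As a quick numeric check of parts (a)-(c), the fraction of observations in the neighborhood is simply $0.1^p$ (plain base-R arithmetic, nothing else assumed):

```r
# Fraction of observations inside a neighborhood covering 10% of each feature's range
p <- c(1, 2, 100)
cbind(p = p, fraction = 0.1^p)   # 0.1, 0.01, and 1e-100
```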
Question 2
Answer the following questions about the differences between LDA and QDA.
(a) If the Bayes decision boundary is linear, do we expect LDA or QDA to perform better on the training set? On the test set?
(b) If the Bayes decision boundary is non-linear, do we expect LDA or QDA to perform better on the training set? On the test set?
(c) In general, as the sample size n increases, do we expect the test prediction accuracy of QDA relative to LDA to improve, decline, or be unchanged? Why?
(d) True or False: Even if the Bayes decision boundary for a given problem is linear, we will probably achieve a superior test error rate using QDA rather than LDA because QDA is flexible enough to model a linear decision boundary. Justify your answer.
Solution to Question 2
(a)
- Training Set: We expect QDA to perform better on the training set. QDA is a more flexible model (it has more parameters) than LDA, so it can fit the training data more closely, leading to a lower training error rate.
- Test Set: We expect LDA to perform better on the test set. Since the true (Bayes) boundary is linear, the extra flexibility of QDA is unnecessary. LDA, by making the correct assumption of a common covariance matrix, will have lower variance. Using QDA in this case would likely lead to overfitting (higher variance without a reduction in bias), resulting in a higher test error.
(b)
- Training Set: We expect QDA to perform better on the training set. For the same reason as in (a), its higher flexibility allows it to achieve a better fit to the training data.
- Test Set: We expect QDA to perform better on the test set. Because the true boundary is non-linear, QDA’s flexibility to model different class covariances allows it to better approximate the true boundary. LDA, constrained to a linear boundary, will suffer from higher bias.
(c) As the sample size n increases, we expect the test prediction accuracy of QDA relative to LDA to improve.
- Reason: QDA requires estimating a separate covariance matrix for each class, which involves more parameters than LDA (which estimates a single pooled covariance matrix). With a small sample size, the variance introduced by estimating these additional parameters in QDA can hurt its performance. However, as the sample size grows larger, these parameters can be estimated more accurately. The benefit of QDA’s flexibility (potentially lower bias) is realized with less risk of overfitting (high variance), so its performance relative to the simpler LDA model improves.
(d) False.
- Justification: While QDA is flexible enough to model a linear decision boundary (it is a more general model), it is not likely to achieve a superior test error rate when the true boundary is linear. Because QDA has more parameters to estimate, it has higher variance than LDA. If the true boundary is linear, LDA correctly assumes this simpler structure. Using QDA would introduce unnecessary variance without reducing bias, likely leading to a higher test error due to overfitting. Therefore, LDA is generally preferred when we have reason to believe the decision boundary is linear (a small simulation illustrating this appears below).
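A small simulation sketch of the test-set claim in part (a), under assumed settings: two Gaussian classes sharing an identity covariance matrix (so the Bayes boundary is linear), a modest training set, and an arbitrary number of repetitions. It is an illustration, not part of the required answer.

```r
library(MASS)   # provides lda(), qda(), and mvrnorm()
set.seed(1)

make_data <- function(n_per_class) {
  # Two Gaussian classes with a common covariance -> linear Bayes boundary
  X <- rbind(mvrnorm(n_per_class, c(0, 0), diag(2)),
             mvrnorm(n_per_class, c(1.5, 1.5), diag(2)))
  data.frame(X1 = X[, 1], X2 = X[, 2],
             y = factor(rep(c(0, 1), each = n_per_class)))
}

one_run <- function() {
  train <- make_data(30)     # small training set, where QDA's extra variance hurts
  test  <- make_data(2000)
  c(lda = mean(predict(lda(y ~ X1 + X2, data = train), test)$class != test$y),
    qda = mean(predict(qda(y ~ X1 + X2, data = train), test)$class != test$y))
}

rowMeans(replicate(100, one_run()))   # LDA's average test error is typically slightly lower
```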
Question 3
Suppose we collect data for a group of students in a statistics class with variables $X_1$ = hours studied, $X_2$ = undergrad GPA, and $Y$ = receive an A. We fit a logistic regression and produce estimated coefficients $\hat\beta_0 = -6$, $\hat\beta_1 = 0.05$, $\hat\beta_2 = 1$.
(a) Estimate the probability that a student who studies for 40h and has an undergrad GPA of 3.5 gets an A in the class.
(b) How many hours would the student in part (a) need to study to have a 50% chance of getting an A in the class?
Solution to Question 3
(a) Estimate the probability
The logistic regression model is
$$\hat{p}(X) = \frac{e^{\hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2}}{1 + e^{\hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2}}.$$
Given: $\hat\beta_0 = -6$, $\hat\beta_1 = 0.05$, $\hat\beta_2 = 1$, $X_1 = 40$ (hours studied), $X_2 = 3.5$ (GPA).
- Calculate the linear predictor: $z = -6 + 0.05(40) + 1(3.5) = -6 + 2 + 3.5 = -0.5$
- Calculate the probability: $\hat{p} = \dfrac{e^{-0.5}}{1 + e^{-0.5}} \approx \dfrac{0.6065}{1.6065} \approx 0.378$
Answer: The estimated probability is approximately 0.378 (or 37.8%).
(b) Find hours for a 50% chance
A 50% chance means $\hat{p}(X) = 0.5$. In the logistic model, this happens precisely when the linear predictor equals zero: $\hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2 = 0$.
- Set up the equation: we have $\hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2 = 0$. We know $X_2 = 3.5$ and the coefficient estimates; we need to solve for $X_1$ (hours studied).
- Solve for $X_1$: $-6 + 0.05\,X_1 + 1(3.5) = 0 \;\Rightarrow\; 0.05\,X_1 = 2.5 \;\Rightarrow\; X_1 = 50$.
Answer: The student would need to study for 50 hours to have a 50% chance of getting an A.
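A quick R check of both parts, plugging in the coefficient estimates above (the variable names are arbitrary):

```r
b0 <- -6; b1 <- 0.05; b2 <- 1     # estimated coefficients
z  <- b0 + b1 * 40 + b2 * 3.5     # linear predictor for 40 hours and a 3.5 GPA
exp(z) / (1 + exp(z))             # estimated probability of an A: about 0.378
(0 - b0 - b2 * 3.5) / b1          # hours needed so that z = 0: 50
```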
Question 4
Let’s develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set in the ISLP package.
(a) Create a binary variable, mpg01, that contains a 1 if mpg contains a value above its median, and a 0 if mpg contains a value below its median.
(b) Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
(c) Split the data into a training set and a test set.
(d) Perform LDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
(e) Perform QDA on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
(f) Perform logistic regression on the training data in order to predict mpg01 using the variables that seemed most associated with mpg01 in (b). What is the test error of the model obtained?
(g) Perform KNN on the training data, with several values of K, in order to predict mpg01. Use only the variables that seemed most associated with mpg01 in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?
Solution to Question 4
(a) Create binary variable
Load the data and create the binary variable `mpg01` from the median of `mpg`.
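A minimal R sketch, assuming the `Auto` data frame from the `ISLR2` package (the question references the Python `ISLP` package; R is used here to match the `glmnet`/`pls` code later in this write-up):

```r
library(ISLR2)                                            # provides the Auto data set

# mpg01 = 1 if mpg is above its median, 0 otherwise
Auto$mpg01 <- factor(ifelse(Auto$mpg > median(Auto$mpg), 1, 0))
table(Auto$mpg01)                                         # the two classes are roughly balanced
```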
(b) Exploratory Data Analysis (EDA)
- Method: Create boxplots of each predictor against `mpg01`, and scatterplots of pairs of predictors colored by `mpg01` (see the plotting sketch below).
- Key Findings: Features strongly associated with `mpg01` are typically:
  - `weight`: heavier cars generally have lower mpg.
  - `horsepower`: more powerful cars generally have lower mpg.
  - `displacement`: larger engine size generally indicates lower mpg.
  - `acceleration`: cars with slower acceleration (a higher acceleration value, meaning less powerful) may have higher mpg, but the relationship might be less strong than for the first three.
- Conclusion: `weight`, `horsepower`, and `displacement` are the most promising predictors.
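A base-R plotting sketch for the exploration described above, using the same assumed `Auto` data frame:

```r
# Boxplots of each candidate predictor against mpg01
par(mfrow = c(2, 2))
for (v in c("weight", "horsepower", "displacement", "acceleration")) {
  boxplot(Auto[[v]] ~ Auto$mpg01, xlab = "mpg01", ylab = v)
}

# Scatterplot of two strong predictors, colored by class
par(mfrow = c(1, 1))
plot(Auto$weight, Auto$horsepower,
     col = ifelse(Auto$mpg01 == 1, "blue", "red"),
     xlab = "weight", ylab = "horsepower")
```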
(c) Split the data
Set a seed for reproducibility and split the observations into training and test sets.
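A sketch of the split; the 70/30 proportion and the seed are arbitrary choices:

```r
set.seed(1)                                               # for reproducibility
train_idx <- sample(nrow(Auto), round(0.7 * nrow(Auto)))  # 70% of rows for training
train <- Auto[train_idx, ]
test  <- Auto[-train_idx, ]
```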
(d) LDA
Fit an LDA model on the training data using the selected features (e.g., `weight`, `horsepower`, `displacement`) and compute the test error.
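An LDA sketch, assuming the predictors chosen in (b) and the `train`/`test` objects from (c):

```r
library(MASS)                                             # lda() and qda()
lda_fit  <- lda(mpg01 ~ weight + horsepower + displacement, data = train)
lda_pred <- predict(lda_fit, test)$class
mean(lda_pred != test$mpg01)                              # LDA test error rate
```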
(e) QDA
Fit a QDA model with the same features and compute the test error.
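A QDA sketch under the same assumptions:

```r
qda_fit  <- qda(mpg01 ~ weight + horsepower + displacement, data = train)
qda_pred <- predict(qda_fit, test)$class
mean(qda_pred != test$mpg01)                              # QDA test error rate
```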
(f) Logistic Regression
Fit a logistic regression model with the same features and compute the test error.
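A logistic regression sketch under the same assumptions:

```r
glm_fit  <- glm(mpg01 ~ weight + horsepower + displacement,
                data = train, family = binomial)
glm_prob <- predict(glm_fit, test, type = "response")     # predicted P(mpg01 = 1)
glm_pred <- ifelse(glm_prob > 0.5, "1", "0")
mean(glm_pred != test$mpg01)                              # logistic regression test error rate
```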
(g) KNN
Standardize the features, build training and test matrices, and run KNN for several values of K.
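A KNN sketch under the same assumptions; the grid of K values is an arbitrary choice:

```r
library(class)                                            # knn()
vars <- c("weight", "horsepower", "displacement")

# Standardize with the training means/SDs so no feature dominates the distance
mu  <- colMeans(train[, vars])
sds <- apply(train[, vars], 2, sd)
X_train <- scale(train[, vars], center = mu, scale = sds)
X_test  <- scale(test[, vars],  center = mu, scale = sds)

for (k in c(1, 3, 5, 10, 20)) {
  pred <- knn(X_train, X_test, cl = train$mpg01, k = k)
  cat("K =", k, " test error =", mean(pred != test$mpg01), "\n")
}
```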
Summary of Findings:
- The best model is often logistic regression or LDA, with test errors around 10%.
- QDA might perform similarly or slightly worse.
- KNN performance depends heavily on K. A moderate K (e.g., 5 or 10) usually performs best, potentially achieving error rates similar to the linear models.
Question 5
We perform best subset, forward stepwise, and backward stepwise selection on a single data set. For each approach, we obtain $p+1$ models, containing $0, 1, 2, \ldots, p$ predictors. Answer the following questions:
(a) Which of the three models with k predictors has the smallest training RSS?
(b) Which of the three models with k predictors has the smallest test MSE?
(c) True or False for each statement below.
i. The predictors in the k-variable model identified by forward stepwise are a subset of the predictors in the (k+1)-variable model identified by forward stepwise selection.
ii. The predictors in the k-variable model identified by backward stepwise are a subset of the predictors in the (k+1)-variable model identified by backward stepwise selection.
iii. The predictors in the k-variable model identified by backward stepwise are a subset of the predictors in the (k+1)-variable model identified by forward stepwise selection.
iv. The predictors in the k-variable model identified by forward stepwise are a subset of the predictors in the (k+1)-variable model identified by backward stepwise selection.
v. The predictors in the k-variable model identified by best subset are a subset of the predictors in the (k+1)-variable model identified by best subset selection.
Solution to Question 5
(a) Training RSS
The best subset selection model with k predictors will have the smallest training RSS.
- Reason: Best subset selection searches through all possible combinations of k predictors and chooses the best one. Forward and backward stepwise are greedy approximations that do not guarantee the absolute best model of size k.
(b) Test MSE
There is no definitive answer; it depends on the specific dataset and the true relationship between the predictors and the response.
- Reason: The model with the smallest test MSE is the one that best balances bias and variance. While best subset has the lowest training RSS (and thus lowest bias), it might overfit the training data (high variance), leading to a higher test MSE than a more constrained stepwise approach in some cases. The relative performance is unpredictable and must be validated on test data.
(c) True or False
i. True.
- Reason: Forward stepwise selection starts with no predictors and adds one predictor at a time. The model with k predictors is built directly from the model with k-1 predictors by adding the next best predictor. Therefore, the set of predictors in the k-variable model is always a subset of the predictors in the (k+1)-variable model. The model path is “nested.”
ii. True.
- Reason: Backward stepwise selection starts with all p predictors and removes one predictor at a time. The model with k predictors is obtained directly from the model with k+1 predictors by removing the least useful predictor. Therefore, the predictors in the k-variable model are a subset of the predictors in the (k+1)-variable model. This path is also “nested.”
iii. False.
- Reason: There is no guaranteed relationship between the subsets of predictors chosen by backward stepwise for a given k and the subsets chosen by forward stepwise for k+1. The two algorithms follow different search paths and can produce very different models.
iv. False.
- Reason: Similarly, there is no guaranteed subset relationship between the models found by forward and backward stepwise selection. The k-variable model from forward stepwise is not necessarily a subset of the (k+1)-variable model from backward stepwise.
v. False.
- Reason: Best subset selection independently finds the best model for each possible model size. The optimal set of k predictors is not necessarily a subset of the optimal set of k+1 predictors. The algorithm is free to choose a completely different combination of predictors for each model size.
Question 6
Choose the correct answer for each question below.
(a) The lasso, relative to least squares, is:
i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
(b) Ridge regression, relative to least squares, is:
i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
(c) Nonlinear methods, relative to least squares, are:
i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
Solution to Question 6
(a) Correct Answer: iii
- Reasoning: Lasso regression uses L1 regularization (shrinking coefficients, some to exactly zero), which makes it less flexible than least squares. A less flexible model has higher bias but lower variance. The trade-off is beneficial for prediction accuracy only if the increase in bias is small relative to the decrease in variance.
(b) Correct Answer: iii
- Reasoning: Ridge regression uses L2 regularization (shrinking coefficients towards zero but not exactly zero), which also makes it less flexible than least squares. Similar to the lasso, it improves prediction accuracy when the increase in bias is less than the decrease in variance.
(c) Correct Answer: ii
- Reasoning: Nonlinear methods (e.g., polynomial regression, splines, decision trees) are more flexible than the linear least squares model. A more flexible model has lower bias but higher variance. The trade-off is beneficial for prediction accuracy only if the increase in variance is less than the decrease in bias.
Question 7
Suppose we estimate the regression coefficients in a linear regression model by minimizing
$$\sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\Big)^2 \quad \text{subject to} \quad \sum_{j=1}^{p}|\beta_j| \le s$$
for a particular value of s. Choose the correct answer for each question below.
(a) As we increase s from 0, the training RSS will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
(b) As we increase s from 0, the test MSE will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
(c) As we increase s from 0, the variance will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
(d) As we increase s from 0, the squared bias will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
Solution to Question 7
This problem describes the Lasso regression method, where s is a bound on the L1-norm of the coefficients. As s increases, the model becomes less constrained, moving from a null model towards the full least squares solution.
(a) Correct Answer: iv. Steadily decrease.
- Reasoning: The training RSS is minimized when the model fits the training data best. When s = 0, all coefficients are forced to be zero, resulting in a high RSS. As s increases, the constraint is relaxed, allowing the coefficients to take on values that better fit the training data. Therefore, the training RSS will decrease monotonically (steadily) as the model's flexibility increases, reaching a minimum at the least squares solution (when s is large enough).
(b) Correct Answer: ii. Decrease initially, and then eventually start increasing in a U shape.
- Reasoning: Test MSE captures the prediction error on new data and is subject to the bias-variance trade-off. For very small s (a strong constraint), the model has high bias (underfitting), leading to high test MSE. As s increases to an optimal value, variance increases slightly, but bias decreases significantly, leading to a decrease in test MSE. Beyond this optimal point, further increasing s leads to overfitting (variance increases dramatically with little reduction in bias), causing the test MSE to rise again. This creates a characteristic U shape.
(c) Correct Answer: iii. Steadily increase.
- Reasoning: Variance measures the model's sensitivity to the training data. A highly constrained model (small s) has low variance. As the constraint is relaxed (increasing s), the model has more freedom to fit the specific nuances of the training set, making its estimates more variable. Therefore, variance increases steadily as s increases.
(d) Correct Answer: iv. Steadily decrease.
- Reasoning: (Squared) bias measures the error introduced by the model's inability to represent the true relationship. A very constrained model (small s) is simplistic and may have high bias. As s increases, the model becomes more flexible and can better approximate the underlying true function, leading to a steady decrease in bias.
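A small simulation sketch of part (a), under an assumed setup (simulated data, with `glmnet` used to trace the lasso path): as the L1 budget s grows, the training RSS falls.

```r
library(glmnet)
set.seed(1)
n <- 100; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- as.numeric(X %*% rnorm(p) + rnorm(n))

fit <- glmnet(X, y, alpha = 1)                          # lasso solution path over a lambda grid
B <- as.matrix(coef(fit))[-1, , drop = FALSE]           # coefficients at each point on the path
s_vals    <- colSums(abs(B))                            # L1 norm of the coefficients = budget s
train_rss <- colSums((y - predict(fit, newx = X))^2)    # training RSS at each point on the path

plot(s_vals, train_rss, type = "b",
     xlab = "s (L1 norm of coefficients)", ylab = "Training RSS")
# Training RSS decreases steadily as s increases, consistent with answer (a) iv.
```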
Question 8
Suppose we estimate the regression coefficients in a linear regression model by minimizing
$$\sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\Big)^2 + \lambda\sum_{j=1}^{p}\beta_j^2$$
for a particular value of $\lambda$. Choose the correct answer for each question below.
(a) As we increase $\lambda$ from 0, the training RSS will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
(b) As we increase $\lambda$ from 0, the test MSE will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
(c) As we increase $\lambda$ from 0, the variance will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
(d) As we increase $\lambda$ from 0, the (squared) bias will:
i. Increase initially, and then eventually start decreasing in an inverted U shape.
ii. Decrease initially, and then eventually start increasing in a U shape.
iii. Steadily increase.
iv. Steadily decrease.
v. Remain constant.
Solution to Question 8
This problem describes Ridge regression, where $\lambda$ controls the strength of L2 regularization. As $\lambda$ increases, the model becomes more constrained, moving from the full least squares solution toward a null model.
(a) Correct Answer: iii. Steadily increase.
- Reasoning: The training RSS measures how well the model fits the training data. When $\lambda = 0$, we have the ordinary least squares solution, which minimizes the RSS. As $\lambda$ increases, the regularization term forces the coefficients to shrink toward zero, making the model less flexible and reducing its ability to fit the training data perfectly. Therefore, the training RSS will increase steadily as $\lambda$ increases.
(b) Correct Answer: ii. Decrease initially, and then eventually start increasing in a U shape.
- Reasoning: Test MSE is subject to the bias-variance trade-off. When $\lambda = 0$ (no regularization), the model may overfit (high variance), leading to high test MSE. As $\lambda$ increases to an optimal value, the reduction in variance outweighs the increase in bias, causing test MSE to decrease. Beyond this optimal point, the model becomes too constrained (high bias, underfitting), and test MSE increases again. This creates a U-shaped curve.
(c) Correct Answer: iv. Steadily decrease.
- Reasoning: Variance measures the model's sensitivity to fluctuations in the training data. A complex model ($\lambda = 0$) has high variance. As $\lambda$ increases, the regularization constrains the coefficients, making the model more stable and less sensitive to the specific training sample. Therefore, variance decreases steadily as $\lambda$ increases.
(d) Correct Answer: iii. Steadily increase.
- Reasoning: (Squared) bias measures the error from approximating a complex real-world phenomenon with a simpler model. When $\lambda = 0$, the model is very flexible and has low bias. As $\lambda$ increases, the model becomes more constrained and less able to capture the true underlying relationship in the data. Therefore, bias increases steadily as $\lambda$ increases.
Question 9
We will predict the number of applications received in the College data set in the ISLP package.
(a) Split the data set into a training set and a test set.
(b) Fit a linear model using least squares on the training set, and report the test error obtained.
(c) Fit a ridge regression model on the training set, with $\lambda$ chosen by cross-validation. Report the test error obtained.
(d) Fit a lasso model on the training set, with $\lambda$ chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
(e) Fit a PCR model on the training set, with M (the number of principal components) chosen by cross validation. Report the test error obtained, along with the number of PCs selected by cross validation.
Solution to Question 9
Note: This is a practical coding exercise. The exact results will depend on the random seed used for splitting the data. Below is a typical approach and expected outcomes.
(a) Data Splitting
Load the required libraries and data, then split the observations into training and test sets.
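A sketch of the split, assuming the `College` data frame from the `ISLR2` package and an arbitrary 70/30 partition:

```r
library(ISLR2)                                            # provides the College data set
set.seed(1)
train_idx <- sample(nrow(College), round(0.7 * nrow(College)))
train <- College[train_idx, ]
test  <- College[-train_idx, ]
```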
(b) Linear Regression (Least Squares)
Fit a least squares linear model on the training set and compute the test MSE.
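A least squares sketch regressing `Apps` on all remaining predictors:

```r
lm_fit  <- lm(Apps ~ ., data = train)
lm_pred <- predict(lm_fit, test)
mean((test$Apps - lm_pred)^2)                             # least squares test MSE
```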
(c) Ridge Regression
Fit a ridge regression model with `glmnet`, choosing $\lambda$ by cross-validation.
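A ridge sketch with `glmnet`; by default `cv.glmnet` runs 10-fold cross-validation over its own $\lambda$ grid:

```r
library(glmnet)
X_train <- model.matrix(Apps ~ ., train)[, -1]            # drop the intercept column
X_test  <- model.matrix(Apps ~ ., test)[, -1]

cv_ridge   <- cv.glmnet(X_train, train$Apps, alpha = 0)   # alpha = 0 -> ridge
ridge_pred <- predict(cv_ridge, newx = X_test, s = cv_ridge$lambda.min)
mean((test$Apps - ridge_pred)^2)                          # ridge test MSE
```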
(d) Lasso Regression
Fit the lasso with cross-validation and count the non-zero coefficient estimates.
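The lasso version (`alpha = 1`), reusing the model matrices built for ridge:

```r
cv_lasso   <- cv.glmnet(X_train, train$Apps, alpha = 1)   # alpha = 1 -> lasso
lasso_pred <- predict(cv_lasso, newx = X_test, s = cv_lasso$lambda.min)
mean((test$Apps - lasso_pred)^2)                          # lasso test MSE

lasso_coef <- predict(cv_lasso, s = cv_lasso$lambda.min, type = "coefficients")
sum(lasso_coef != 0) - 1                                  # non-zero coefficients, excluding the intercept
```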
(e) Principal Component Regression (PCR)
Fit a PCR model with the `pls` package, choosing M by cross-validation.
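A PCR sketch with the `pls` package; M is taken from the cross-validation curve:

```r
library(pls)
pcr_fit <- pcr(Apps ~ ., data = train, scale = TRUE, validation = "CV")
validationplot(pcr_fit, val.type = "MSEP")                # inspect CV error vs. number of components

cv_msep <- MSEP(pcr_fit, estimate = "CV")$val[1, 1, ]     # CV error for 0, 1, ..., p components
M <- which.min(cv_msep) - 1                               # subtract 1 for the intercept-only model
pcr_pred <- predict(pcr_fit, test, ncomp = M)
mean((test$Apps - pcr_pred)^2)                            # PCR test MSE using M components
```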
Expected Results Summary:
- Linear Regression: highest test error, due to potential overfitting
- Ridge Regression: roughly a 5-10% improvement over linear regression
- Lasso Regression: similar performance to ridge, with variable selection
- PCR: performance depends on whether the important predictors align with the first few principal components
Question 10
Suppose we fit a curve with basis functions $b_1(X) = X$ and $b_2(X) = (X-1)^2\, I(X \ge 1)$, where $I(\cdot)$ is the indicator function. We fit the linear regression model
$$Y = \beta_0 + \beta_1 b_1(X) + \beta_2 b_2(X) + \varepsilon$$
and obtain coefficient estimates $\hat\beta_0 = 1$, $\hat\beta_1 = 1$, $\hat\beta_2 = -2$. Sketch the estimated curve between $X = -2$ and $X = 2$. Note the intercepts, slopes, and other relevant information.
Solution to Question 10
Step 1: Write the estimated curve function.
The estimated curve is
$$\hat{f}(X) = 1 + X - 2\,(X-1)^2\, I(X \ge 1).$$
This function is piecewise defined due to the indicator function:
- For $X < 1$: $\hat{f}(X) = 1 + X$ (since $I(X \ge 1) = 0$)
- For $X \ge 1$: $\hat{f}(X) = 1 + X - 2(X-1)^2$
Step 2: Simplify the expression for $X \ge 1$.
$$1 + X - 2(X-1)^2 = 1 + X - 2(X^2 - 2X + 1) = -2X^2 + 5X - 1$$
So the piecewise function is:
$$\hat{f}(X) = \begin{cases} 1 + X, & X < 1 \\ -2X^2 + 5X - 1, & X \ge 1 \end{cases}$$
Step 3: Key features of the curve.
- For $X < 1$: $\hat{f}(X) = 1 + X$
  - This is a straight line with slope = 1 and y-intercept = 1.
  - At $X = -2$: $\hat{f}(-2) = -1$
  - At $X = 0$: $\hat{f}(0) = 1$
  - As $X$ approaches 1 from the left: $\hat{f}(X) \to 2$
- For $X \ge 1$: $\hat{f}(X) = -2X^2 + 5X - 1$
  - This is a downward-opening parabola (since the coefficient of $X^2$ is negative).
  - At $X = 1$: $\hat{f}(1) = 2$ (continuous with the linear part)
  - The vertex of the parabola occurs at $X = \frac{5}{4} = 1.25$
  - At the vertex $X = 1.25$: $\hat{f}(1.25) = 2.125$
  - At $X = 2$: $\hat{f}(2) = 1$
- Continuity and differentiability:
  - The curve is continuous at $X = 1$ since both pieces give $\hat{f}(1) = 2$.
  - The derivative for $X < 1$ is $\hat{f}'(X) = 1$.
  - The derivative for $X \ge 1$ is $\hat{f}'(X) = -4X + 5$.
  - At $X = 1$: left derivative = 1, right derivative = -4(1) + 5 = 1. So the curve is smooth (differentiable) at $X = 1$.
Step 4: Sketch description.
The curve starts at point (-2, -1) and increases linearly with slope 1 until reaching (1, 2). Then it continues as a parabola, rising to a maximum at (1.25, 2.125), and then decreasing to (2, 1). The curve is smooth throughout with no sharp corners.
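A short R sketch that draws this curve, using the coefficient estimates from the question:

```r
b0 <- 1; b1 <- 1; b2 <- -2
x <- seq(-2, 2, length.out = 400)
yhat <- b0 + b1 * x + b2 * (x - 1)^2 * (x >= 1)           # the indicator is handled by (x >= 1)

plot(x, yhat, type = "l", xlab = "X", ylab = "Estimated curve")
abline(v = 1, lty = 2)                                    # knot at X = 1
points(c(-2, 1, 1.25, 2), c(-1, 2, 2.125, 1), pch = 19)   # key points noted above
```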
