Linear Regression Pt. 3 - Casewise Diagnostics

After building our simple and multiple regression models, we turn our attention to casewise diagnostics to learn which outliers are present in the sample and which data points exert undue influence on the model, which could affect the model's stability. We will focus on standardized residuals, Cook's distance, and leverage (hat) values, but there are several other measures for assessing model diagnostics.

Outliers and standardized residuals

Residuals help us understand how well the model fits the sample data. Standardized residuals are derived by dividing the raw (unstandardized) residuals by an estimate of their standard deviation. Standardized residuals can be obtained by applying the rstandard() function to a regression model object. Alternatively, we can use the augment() function from the broom package to compute the standardized residuals, Cook's distance, and leverage (hat) values in one command. Standardized residuals serve two primary roles. First, they facilitate comparison across different models because they are expressed in standard deviation units rather than the units of the outcome variable. Second, they serve as an indicator of outliers that may bias the estimated regression coefficients. Two general rules of thumb are that no more than 5% of the absolute values of the standardized residuals should be greater than 2, and no more than 1% should be greater than 2.5. In our example dataset, about 5.2% of the standardized residuals fall outside the +/-2 boundary, which is evidence that our model may not represent our outcome data well.

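To make the definition concrete, standardized residuals can be reproduced by hand from the raw residuals, the residual standard error, and the hat values. A minimal sketch on simulated data (the objects d, m, and manual are illustrative, not the tutorial's salary dataset or its multiple model object):

```r
# Sketch: a standardized residual is the raw residual divided by
# sigma_hat * sqrt(1 - h_i), where h_i is the case's leverage (hat) value.
set.seed(1)
d <- data.frame(x = rnorm(50))
d$y <- 2 + 3 * d$x + rnorm(50)
m <- lm(y ~ x, data = d)

manual <- resid(m) / (summary(m)$sigma * sqrt(1 - hatvalues(m)))
all.equal(unname(manual), unname(rstandard(m)))
## [1] TRUE
```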
# augment() from the broom package; dplyr supplies %>%, select(), and filter(),
# and knitr supplies kable()
library(broom)
library(dplyr)
library(knitr)

dx <- augment(multiple) %>% select(SALARY, AGE, YEARS, BEAUTY, .fitted, .std.resid, .cooksd, .hat)

# Create a boolean vector of large residuals; greater than 2 or less than -2 
dx$large.residual <- dx$.std.resid > 2 | dx$.std.resid < -2

# Sum of large standardized residuals
sum(dx$large.residual)
## [1] 12
# Percentage of standardized residuals greater than 2 or less than -2
sum(dx$large.residual)/length(dx$large.residual) * 100
## [1] 5.194805
kable(filter(dx, large.residual == TRUE) %>%
    select(SALARY, AGE, YEARS, BEAUTY, .std.resid, large.residual))
| SALARY | AGE | YEARS | BEAUTY | .std.resid | large.residual |
|---:|---:|---:|---:|---:|:---|
| 53.72479 | 20.34707 | 5.506886 | 68.56999 | 2.214829 | TRUE |
| 95.33807 | 24.17183 | 8.532050 | 71.77039 | 4.696607 | TRUE |
| 48.86766 | 19.11451 | 4.951027 | 73.32626 | 2.241876 | TRUE |
| 51.02516 | 19.46200 | 5.187275 | 80.00141 | 2.420635 | TRUE |
| 56.83152 | 24.41146 | 8.753041 | 80.65103 | 2.099147 | TRUE |
| 64.79129 | 18.46839 | 4.284322 | 78.91763 | 3.440027 | TRUE |
| 61.31880 | 22.25275 | 7.397138 | 78.92917 | 2.778123 | TRUE |
| 89.98003 | 22.28899 | 7.419825 | 75.93018 | 4.717284 | TRUE |
| 74.86075 | 24.40682 | 8.444767 | 86.09212 | 3.319137 | TRUE |
| 54.56552 | 22.31422 | 6.833367 | 88.01470 | 2.200115 | TRUE |
| 50.65578 | 15.27406 | 2.981697 | 66.38544 | 3.177863 | TRUE |
| 71.32073 | 20.65061 | 5.834559 | 77.57684 | 3.531357 | TRUE |

Influential cases: Cook’s distance

One way to determine which cases within a regression model have undue influence on the model parameters is to calculate Cook's distance. Cook's distance has a straightforward interpretation - any value greater than 1 may be cause for concern. Cook's distance values can be obtained with the cooks.distance() function by passing a regression model object as its input. However, we will use the dx data frame that was created with the augment() function from the broom package. In this dataset, there are no values greater than 1. This suggests that the model is stable across the sample because no case exerts undue influence on the model parameters.

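For intuition about where these values come from, Cook's distance can be reproduced from the standardized residuals and hat values as rstandard^2 / p * h / (1 - h), where p is the number of estimated coefficients. A hedged sketch on simulated data (illustrative names, not the salary dataset):

```r
# Sketch: reproduce cooks.distance() from standardized residuals and
# leverage (hat) values on simulated data.
set.seed(1)
d <- data.frame(x = rnorm(50))
d$y <- 2 + 3 * d$x + rnorm(50)
m <- lm(y ~ x, data = d)

p <- length(coef(m))  # number of estimated coefficients (k + 1)
h <- hatvalues(m)
manual <- rstandard(m)^2 / p * h / (1 - h)
all.equal(unname(manual), unname(cooks.distance(m)))
## [1] TRUE
```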
# Create a boolean vector of large Cook's distance values; greater than 1
dx$large.cooksd <- dx$.cooksd > 1

# Sum of large Cook's distance values
sum(dx$large.cooksd)
## [1] 0

Influential cases: Leverage/hat values

Leverage/hat values are an additional measure of influential cases. Leverage values can be obtained by passing the regression model object to the hatvalues() function, but they are already in our dx data frame. Cases with hat values that are 2 or 3 times as large as the average leverage, (k + 1)/n, where k = the number of predictors and n = the sample size, may have undue influence. With these data, the cutoffs are 0.035 and 0.052, depending on how conservative you want to be. There are 25 cases with hat values greater than 2 times the average leverage value, and 3 cases with hat values greater than 3 times the average leverage value.

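The (k + 1)/n rule reflects the fact that the hat values always sum to the number of estimated coefficients, so their mean is (k + 1)/n. A small sketch with simulated data and three predictors (illustrative names, not the salary dataset):

```r
# Sketch: with k = 3 predictors, the hat values sum to k + 1 = 4,
# so the average leverage is 4 / n.
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
d$y <- 1 + d$x1 - d$x2 + 0.5 * d$x3 + rnorm(50)
m <- lm(y ~ x1 + x2 + x3, data = d)

sum(hatvalues(m))   # equals k + 1 = 4
mean(hatvalues(m))  # equals (k + 1)/n = 4/50 = 0.08
```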
# Average leverage: (# of predictors + 1) divided by n
# Cutoff at 2 times the average leverage
round(((3 + 1)/231) * 2, 3)
## [1] 0.035
round(((3 + 1)/231) * 3, 3)
## [1] 0.052
# Create a boolean vector of large hat values
dx$large.hat <- dx$.hat > ((3 + 1)/231) * 2

# Sum of large leverage 2, conservative
sum(dx$large.hat)
## [1] 25
# Overwrite the boolean vector using 3 times the average leverage
dx$large.hat <- dx$.hat > ((3 + 1)/231) * 3

# Sum of large leverage 3, less conservative
sum(dx$large.hat)
## [1] 3
# Print table
kable(filter(dx, large.hat == TRUE) %>%
    select(SALARY, AGE, YEARS, BEAUTY, .hat, large.hat))
| SALARY | AGE | YEARS | BEAUTY | .hat | large.hat |
|---:|---:|---:|---:|---:|:---|
| 6.419431 | 18.99114 | 5.237983 | 99.22141 | 0.0623528 | TRUE |
| 22.681436 | 25.28966 | 9.932158 | 75.42206 | 0.0580976 | TRUE |
| 3.534942 | 16.04653 | 4.598695 | 83.59070 | 0.0600435 | TRUE |
