Computational and technical notes on cross-validating regression models
John Fox and Georges Monette
2024-10-16
Source: vignettes/cv-notes.Rmd
Efficient computations for linear and generalized linear models
The most straightforward way to implement cross-validation in R for statistical modeling functions that are written in the canonical manner is to use update() to refit the model with each fold removed. This is the approach taken in the default method for cv(), and it is appropriate if the cases are independently sampled. Refitting the model in this manner for each fold is generally feasible when the number of folds is modest, but can be prohibitively costly for leave-one-out cross-validation when the number of cases is large.
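To fix ideas, here is a minimal sketch of this naive approach for a linear model, refitting with update() as each fold is dropped. The function naiveCV() and its arguments are our own illustration, not code from the cv package, and it assumes a formula with a single response variable on the left-hand side:

# Illustrative only: naive k-fold CV for a linear model by refitting with update()
naiveCV <- function(model, data, k = 10) {
  n <- nrow(data)
  folds <- sample(rep(1:k, length.out = n))        # random fold assignment
  y <- data[[all.vars(formula(model))[1]]]         # response (assumes simple LHS)
  yhat.cv <- numeric(n)
  for (j in 1:k) {
    in.fold <- folds == j
    m.j <- update(model, data = data[!in.fold, ])  # refit without fold j
    yhat.cv[in.fold] <- predict(m.j, newdata = data[in.fold, ])
  }
  mean((y - yhat.cv)^2)                            # CV estimate of MSE
}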
The "lm"
and "glm"
methods for
cv()
take advantage of computational efficiencies by
avoiding refitting the model with each fold removed. Consider, in
particular, the weighted linear model
,
where
.
Here,
is the response vector,
the model matrix, and
the error vector, each for
cases, and
is the vector of
population regression coefficients. The errors are assumed to be
multivariately normally distributed with 0 means and covariance matrix
,
where
is a diagonal matrix of inverse-variance weights. For the linear model
with constant error variance, the weight matrix is taken to be
,
the
order-
identity matrix.
The weighted-least-squares (WLS) estimator of $\boldsymbol{\beta}$ is (see, e.g., Fox, 2016, sec. 12.2.2)1
$$\mathbf{b} = \left(\mathbf{X}^{T}\mathbf{W}\mathbf{X}\right)^{-1}\mathbf{X}^{T}\mathbf{W}\mathbf{y}$$
Fitted values are then $\widehat{\mathbf{y}} = \mathbf{X}\mathbf{b}$.
The LOO fitted value for the $i$th case can be efficiently computed by
$$\widehat{y}_{-i} = \frac{\widehat{y}_i - h_i y_i}{1 - h_i}$$
where $h_i = w_i \mathbf{x}_i \left(\mathbf{X}^{T}\mathbf{W}\mathbf{X}\right)^{-1} \mathbf{x}_i^{T}$ (the so-called "hatvalue"). Here, $\mathbf{x}_i$ is the $i$th row of $\mathbf{X}$, and $\mathbf{x}_i^{T}$ is the $i$th row written as a column vector. This approach can break down when one or more hatvalues are equal to 1, in which case the formula for $\widehat{y}_{-i}$ requires division by 0.
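To illustrate the hatvalue shortcut numerically (this is not the package's internal code), the following sketch uses the mtcars data supplied with R and compares deleted fitted values computed from the hatvalues with those obtained by literally refitting the model for each case:

m <- lm(mpg ~ wt + hp, data = mtcars)
h <- hatvalues(m)
yhat.loo <- (fitted(m) - h*mtcars$mpg)/(1 - h)    # LOO fitted values via hatvalues
yhat.refit <- sapply(seq_len(nrow(mtcars)), function(i)
  predict(update(m, data = mtcars[-i, ]), newdata = mtcars[i, ]))
all.equal(as.vector(yhat.loo), as.vector(yhat.refit))  # TRUE (up to numerical error)
mean((mtcars$mpg - yhat.loo)^2)                   # LOO CV estimate of MSE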
To compute cross-validated fitted values when the folds contain more than one case, we make use of the Woodbury matrix identity (Hager, 1989),
$$\left(\mathbf{A} + \mathbf{U}\mathbf{C}\mathbf{V}\right)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{U}\left(\mathbf{C}^{-1} + \mathbf{V}\mathbf{A}^{-1}\mathbf{U}\right)^{-1}\mathbf{V}\mathbf{A}^{-1}$$
where $\mathbf{A}$ is a nonsingular order-$p$ matrix. We apply this result by letting $\mathbf{A} = \mathbf{X}^{T}\mathbf{W}\mathbf{X}$, $\mathbf{U} = \mathbf{X}_{\mathbf{j}}^{T}$, $\mathbf{C} = -\mathbf{W}_{\mathbf{j}}$, and $\mathbf{V} = \mathbf{X}_{\mathbf{j}}$, where the subscript $\mathbf{j}$ represents the vector of indices for the cases in the $j$th fold, $j = 1, \ldots, k$. The negative sign in $\mathbf{C} = -\mathbf{W}_{\mathbf{j}}$ reflects the removal, rather than addition, of the cases in the $j$th fold.
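For a fold containing several cases, the following sketch (again illustrative rather than the package's code) downdates $\left(\mathbf{X}^{T}\mathbf{X}\right)^{-1}$ for an unweighted linear model via the Woodbury identity, taking the indices in j to be a hypothetical fold, and checks the result against direct computation:

m <- lm(mpg ~ wt + hp, data = mtcars)
X <- model.matrix(m)
A.inv <- solve(crossprod(X))        # (X'WX)^{-1} with W = I
j <- c(3, 10, 17)                   # hypothetical fold
Xj <- X[j, , drop = FALSE]
# Woodbury with C = -I: (A - Xj'Xj)^{-1} = A^{-1} + A^{-1} Xj' (I - Xj A^{-1} Xj')^{-1} Xj A^{-1}
A.inv.mj <- A.inv + A.inv %*% t(Xj) %*%
  solve(diag(length(j)) - Xj %*% A.inv %*% t(Xj)) %*% Xj %*% A.inv
all.equal(A.inv.mj, solve(crossprod(X[-j, ])))           # TRUE (up to numerical error)
b.mj <- A.inv.mj %*% crossprod(X[-j, ], mtcars$mpg[-j])  # coefficients without fold j
yhat.j <- Xj %*% b.mj                                    # CV fitted values for fold j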
Applying the Woodbury identity isn't quite as fast as using the hatvalues, but it is generally much faster than refitting the model. A disadvantage of the Woodbury identity, however, is that it entails explicit matrix inversion and thus may be numerically unstable. The inverse of $\mathbf{X}^{T}\mathbf{W}\mathbf{X}$ is available directly in the "lm" object, but the second term on the right-hand side of the Woodbury identity requires a matrix inversion with each fold deleted. (In contrast, the inverse of each $\mathbf{C} = -\mathbf{W}_{\mathbf{j}}$ is straightforward because $\mathbf{W}_{\mathbf{j}}$ is diagonal.)
The Woodbury identity also requires that the model matrix be of full rank. We impose that restriction in our code by removing redundant regressors from the model matrix for all of the cases, but that doesn't preclude rank deficiency from surfacing when a fold is removed. Rank deficiency of the model matrix with a fold removed doesn't disqualify cross-validation, because all we need are fitted values under the estimated model.
glm() computes the maximum-likelihood estimates for a generalized linear model by iterated weighted least squares (see, e.g., Fox & Weisberg, 2019, sec. 6.12). The last iteration is therefore just a WLS fit of the "working response" on the model matrix using "working weights." Both the working weights and the working response at convergence are available from the information in the object returned by glm().
We then treat re-estimation of the model with a case or cases deleted as a WLS problem, using the hatvalues or the Woodbury matrix identity. The resulting fitted values for the deleted fold aren't exact: except for the Gaussian family, the result isn't identical to what we would obtain by literally refitting the model. In our (limited) experience, however, the approximation is very good, especially for LOO CV, which is when we would be most tempted to use it. Nevertheless, because these results are approximate, the default for the "glm" cv() method is to perform the exact computation, which entails refitting the model with each fold omitted.
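As a brief check of this idea (our illustration, not the package's code), the final IWLS step of a logistic regression can be reproduced by a single WLS fit of the working response on the predictors with the working weights, both of which are stored in the fitted "glm" object; the hatvalue and Woodbury computations are then applied to this WLS problem:

m <- glm(am ~ wt + hp, data = mtcars, family = binomial)
w <- m$weights                                             # working weights at convergence
z <- m$linear.predictors + residuals(m, type = "working")  # working response
m.wls <- lm(z ~ wt + hp, data = mtcars, weights = w)
all.equal(coef(m), coef(m.wls), tolerance = 1e-6)          # TRUE, to convergence tolerance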
Let's compare the efficiency of the various computational methods for linear and generalized linear models. Consider, for example, leave-one-out cross-validation for the quadratic regression of mpg on horsepower in the Auto data, from the introductory "Cross-validating regression models" vignette, repeated here:
data("Auto", package="ISLR2")
m.auto <- lm(mpg ~ poly(horsepower, 2), data = Auto)
summary(m.auto)
#>
#> Call:
#> lm(formula = mpg ~ poly(horsepower, 2), data = Auto)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -14.714 -2.594 -0.086 2.287 15.896
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 23.446 0.221 106.1 <2e-16 ***
#> poly(horsepower, 2)1 -120.138 4.374 -27.5 <2e-16 ***
#> poly(horsepower, 2)2 44.090 4.374 10.1 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 4.37 on 389 degrees of freedom
#> Multiple R-squared: 0.688, Adjusted R-squared: 0.686
#> F-statistic: 428 on 2 and 389 DF, p-value: <2e-16
library("cv")
#> Loading required package: doParallel
#> Loading required package: foreach
#> Loading required package: iterators
#> Loading required package: parallel
summary(cv(m.auto, k = "loo") ) # default method = "hatvalues"
#> n-Fold Cross Validation
#> method: hatvalues
#> criterion: mse
#> cross-validation criterion = 19.248
summary(cv(m.auto, k = "loo", method = "naive"))
#> n-Fold Cross Validation
#> method: naive
#> criterion: mse
#> cross-validation criterion = 19.248
#> bias-adjusted cross-validation criterion = 19.248
#> full-sample criterion = 18.985
summary(cv(m.auto, k = "loo", method = "Woodbury"))
#> n-Fold Cross Validation
#> method: Woodbury
#> criterion: mse
#> cross-validation criterion = 19.248
#> bias-adjusted cross-validation criterion = 19.248
#> full-sample criterion = 18.985
This is a small regression problem and all three computational
approaches are essentially instantaneous, but it is still of interest to
investigate their relative speed. In this comparison, we include the
cv.glm()
function from the boot package
(Canty & Ripley, 2022; Davison & Hinkley,
1997), which takes the naive approach, and for which we have to
fit the linear model as an equivalent Gaussian GLM. We use the
microbenchmark()
function from the package of the same name
for the timings (Mersmann, 2023):
m.auto.glm <- glm(mpg ~ poly(horsepower, 2), data = Auto)
boot::cv.glm(Auto, m.auto.glm)$delta
#> [1] 19.248 19.248
microbenchmark::microbenchmark(
hatvalues = cv(m.auto, k = "loo"),
Woodbury = cv(m.auto, k = "loo", method = "Woodbury"),
naive = cv(m.auto, k = "loo", method = "naive"),
cv.glm = boot::cv.glm(Auto, m.auto.glm),
times = 10
)
#> Warning in microbenchmark::microbenchmark(hatvalues = cv(m.auto, k = "loo"), :
#> less accurate nanosecond times to avoid potential integer overflows
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> hatvalues 1.2142 1.5072 2.4258 1.7864 3.1546 5.1909 10
#> Woodbury 14.6297 15.6145 17.6650 17.5684 18.6949 23.5773 10
#> naive 304.9688 362.5304 392.0443 393.1834 417.6383 461.6384 10
#> cv.glm 543.3056 604.2405 670.9289 679.1045 728.1365 798.3179 10
On our computer, using the hatvalues is about an order of magnitude faster than employing Woodbury matrix updates, and more than two orders of magnitude faster than refitting the model.2
Similarly, let's return to the logistic-regression model fit to Mroz's data on women's labor-force participation, also employed as an example in the introductory vignette:
data("Mroz", package="carData")
m.mroz <- glm(lfp ~ ., data = Mroz, family = binomial)
summary(m.mroz)
#>
#> Call:
#> glm(formula = lfp ~ ., family = binomial, data = Mroz)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 3.18214 0.64438 4.94 7.9e-07 ***
#> k5 -1.46291 0.19700 -7.43 1.1e-13 ***
#> k618 -0.06457 0.06800 -0.95 0.34234
#> age -0.06287 0.01278 -4.92 8.7e-07 ***
#> wcyes 0.80727 0.22998 3.51 0.00045 ***
#> hcyes 0.11173 0.20604 0.54 0.58762
#> lwg 0.60469 0.15082 4.01 6.1e-05 ***
#> inc -0.03445 0.00821 -4.20 2.7e-05 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 1029.75 on 752 degrees of freedom
#> Residual deviance: 905.27 on 745 degrees of freedom
#> AIC: 921.3
#>
#> Number of Fisher Scoring iterations: 4
summary(cv(m.mroz, # default method = "exact"
k = "loo",
criterion = BayesRule))
#> n-Fold Cross Validation
#> method: exact
#> criterion: BayesRule
#> cross-validation criterion = 0.32005
#> bias-adjusted cross-validation criterion = 0.3183
#> 95% CI for bias-adjusted CV criterion = (0.28496, 0.35164)
#> full-sample criterion = 0.30677
summary(cv(m.mroz,
k = "loo",
criterion = BayesRule,
method = "Woodbury"))
#> n-Fold Cross Validation
#> method: Woodbury
#> criterion: BayesRule
#> cross-validation criterion = 0.32005
#> bias-adjusted cross-validation criterion = 0.3183
#> 95% CI for bias-adjusted CV criterion = (0.28496, 0.35164)
#> full-sample criterion = 0.30677
summary(cv(m.mroz,
k = "loo",
criterion = BayesRule,
method = "hatvalues"))
#> n-Fold Cross Validation
#> method: hatvalues
#> criterion: BayesRule
#> cross-validation criterion = 0.32005
As for linear models, we report some timings for the various
cv()
methods of computation in LOO CV as well as for the
cv.glm()
function from the boot package
(which, recall, refits the model with each case removed, and thus is
comparable to cv()
with method="exact"
):
microbenchmark::microbenchmark(
hatvalues = cv(
m.mroz,
k = "loo",
criterion = BayesRule,
method = "hatvalues"
),
Woodbury = cv(
m.mroz,
k = "loo",
criterion = BayesRule,
method = "Woodbury"
),
exact = cv(m.mroz, k = "loo", criterion = BayesRule),
cv.glm = boot::cv.glm(Mroz, m.mroz,
cost = BayesRule),
times = 10
)
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> hatvalues 1.3749 1.63 2.3453 2.0867 2.7396 4.103 10
#> Woodbury 49.7300 56.17 70.3412 65.1557 87.7798 101.499 10
#> exact 2257.7525 2683.68 2862.9953 2848.5122 2925.1270 3753.001 10
#> cv.glm 2803.3641 2968.17 3332.0015 3382.5428 3661.7138 3833.141 10
There is a substantial time penalty associated with exact computations.
Computation of the bias-corrected CV criterion and confidence intervals
Let $\mathrm{cost}(\mathbf{y}, \widehat{\mathbf{y}})$ represent a cross-validation cost criterion, such as mean-squared error, computed for all of the $n$ values of the response $\mathbf{y}$ based on fitted values $\widehat{\mathbf{y}}$ from the model fit to all of the data. We require that $\mathrm{cost}(\mathbf{y}, \widehat{\mathbf{y}})$ is the mean of casewise components, that is, $\mathrm{cost}(\mathbf{y}, \widehat{\mathbf{y}}) = \frac{1}{n}\sum_{i=1}^{n} \mathrm{cost}(y_i, \widehat{y}_i)$.3 For example, $\mathrm{MSE}(\mathbf{y}, \widehat{\mathbf{y}}) = \frac{1}{n}\sum_{i=1}^{n} (y_i - \widehat{y}_i)^2$.
We divide the $n$ cases into $k$ folds of approximately $n_j$ cases each, where $n_j \approx n/k$, $j = 1, \ldots, k$. As above, let $\mathbf{j}$ denote the indices of the cases in the $j$th fold.
Now define $\mathrm{CV}_{\mathbf{j}} = \mathrm{cost}\left(\mathbf{y}, \widehat{\mathbf{y}}^{(-\mathbf{j})}\right)$. The superscript $(-\mathbf{j})$ on $\widehat{\mathbf{y}}^{(-\mathbf{j})}$ represents fitted values computed for all of the cases from the model with fold $\mathbf{j}$ omitted. Let $\widehat{\mathbf{y}}^{(-)}$ represent the vector of fitted values for all $n$ cases where the fitted value for the $i$th case is computed from the model fit with the fold including the $i$th case omitted (i.e., the fold $\mathbf{j}$ for which $i \in \mathbf{j}$).
Then the cross-validation criterion is just $\mathrm{CV} = \mathrm{cost}\left(\mathbf{y}, \widehat{\mathbf{y}}^{(-)}\right)$. Following Davison & Hinkley (1997, pp. 293-295), the bias-adjusted cross-validation criterion is
$$\mathrm{CV}_{\mathrm{adj}} = \mathrm{CV} + \mathrm{cost}(\mathbf{y}, \widehat{\mathbf{y}}) - \sum_{j=1}^{k} \frac{n_j}{n}\,\mathrm{CV}_{\mathbf{j}}$$
We compute the standard error of CV as
$$\mathrm{SE}(\mathrm{CV}) = \frac{1}{\sqrt{n}} \sqrt{\frac{\sum_{i=1}^{n}\left[\mathrm{cost}\left(y_i, \widehat{y}^{(-)}_i\right) - \mathrm{CV}\right]^2}{n - 1}}$$
that is, as the standard deviation of the casewise components of CV divided by the square root of the number of cases.
We then use $\mathrm{SE}(\mathrm{CV})$ to construct a $100(1 - \alpha)$% confidence interval around the adjusted CV estimate of error:
$$\mathrm{CV}_{\mathrm{adj}} \pm z_{1 - \alpha/2}\,\mathrm{SE}(\mathrm{CV})$$
where $z_{1 - \alpha/2}$ is the $1 - \alpha/2$ quantile of the standard-normal distribution (e.g., for a 95% confidence interval, $\alpha = .05$ and $z_{.975} \approx 1.96$).
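The following sketch translates these formulas into R for the special case of the MSE criterion, where the casewise components of CV are the squared errors. The inputs y, yhat, yhat.cv, yhat.folds, and folds are assumed to have been computed already; they are our illustrative names, not objects created by the cv package:

# y          : response vector for all n cases
# yhat       : fitted values from the model fit to all of the data
# yhat.cv    : for each case, the fitted value from the model fit with its fold omitted
# yhat.folds : list of k vectors of fitted values for *all* cases, the jth computed
#              from the model fit with fold j omitted
# folds      : list of k vectors of case indices defining the folds
mse <- function(y, yhat) mean((y - yhat)^2)
cvSummary <- function(y, yhat, yhat.cv, yhat.folds, folds, level = 0.95) {
  n <- length(y)
  cv <- mse(y, yhat.cv)
  adj.cv <- cv + mse(y, yhat) -
    sum(sapply(seq_along(folds), function(j)
      (length(folds[[j]])/n)*mse(y, yhat.folds[[j]])))
  se.cv <- sd((y - yhat.cv)^2)/sqrt(n)   # SD of casewise components / sqrt(n)
  z <- qnorm(1 - (1 - level)/2)
  c(cv = cv, adj.cv = adj.cv, lower = adj.cv - z*se.cv, upper = adj.cv + z*se.cv)
}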
Bates, Hastie, & Tibshirani (2023) show that the coverage of this confidence interval is poor for small samples, and they suggest a much more computationally intensive procedure, called nested cross-validation, to compute better estimates of error and confidence intervals with better coverage for small samples. We may implement Bates et al.βs approach in a later release of the cv package. At present we use the confidence interval above for sufficiently large , which, based on Bates et al.βs results, we take by default to be .
Why the complement of AUC isn't a casewise CV criterion
Consider calculating AUC for folds in which a validation set contains $n_v$ observations. To calculate AUC in the validation set, we need the vector of prediction criteria, $\widehat{\mathbf{y}} = (\widehat{y}_1, \ldots, \widehat{y}_{n_v})'$, and the vector of observed responses in the validation set, $\mathbf{y} = (y_1, \ldots, y_{n_v})'$, with each $y_i$ equal to 0 or 1.
To construct the ROC curve, only the ordering of the values in $\widehat{\mathbf{y}}$ is relevant. Thus, assuming that there are no ties, and reordering observations if necessary, we can set $\widehat{\mathbf{y}} = (1, 2, \ldots, n_v)'$.
If the AUC can be expressed as the casewise mean or sum of a function $c(\widehat{y}_i, y_i) = c(i, y_i)$, where $y_i \in \{0, 1\}$, then
$$\mathrm{AUC}(\mathbf{y}) = \sum_{i=1}^{n_v} c(i, y_i) \tag{1}$$
must hold for all possible values of $\mathbf{y}$. If all $y_i$ have the same value, either 1 or 0, then the definition of AUC is ambiguous. AUC could be considered undefined, or it could be set to 0 if all $y_i$s are 0 and to 1 if all $y_i$s are 1. If AUC is considered to be undefined in these cases, we have $2^{n_v} - 2$ admissible values for $\mathbf{y}$.
Thus, equation (1) produces either $2^{n_v}$ or $2^{n_v} - 2$ constraints. Although there are only $2 n_v$ possible values for the function $c(\cdot, \cdot)$, equation (1) could, nevertheless, have consistent solutions. We therefore need to determine whether there is a value of $n_v$ for which (1) has no consistent solution for all admissible values of $\mathbf{y}$. In that eventuality, we will have shown that AUC cannot, in general, be expressed through a casewise sum.
If $n_v = 3$, we show below that (1) has no consistent solution if we include all possibilities for $\mathbf{y}$, but does if we exclude cases where all $y_i$s have the same value. If $n_v = 4$, we show that there are no consistent solutions in either case.
The following R function computes AUC from $\mathbf{y}$ and $\widehat{\mathbf{y}}$, accommodating the cases where $\mathbf{y}$ is all 0s or all 1s:
AUC <- function(y, yhat = seq_along(y)) {
s <- sum(y)
if (s == 0)
return(0)
if (s == length(y))
return(1)
Metrics::auc(y, yhat)
}
We then define a function to generate all possible $\mathbf{y}$s of length $n_v$ as rows of the matrix $\mathbf{Y}$:
Ymat <- function(n_v, exclude_identical = FALSE) {
stopifnot(n_v > 0 &&
round(n_v) == n_v) # n_v must be a positive integer
ret <- sapply(0:(2 ^ n_v - 1),
function(x)
as.integer(intToBits(x)))[1:n_v,]
ret <- if (is.matrix(ret))
t(ret)
else
matrix(ret)
colnames(ret) <- paste0("y", 1:ncol(ret))
if (exclude_identical)
ret[-c(1, nrow(ret)),]
else
ret
}
For $n_v = 3$,
Ymat(3)
#> y1 y2 y3
#> [1,] 0 0 0
#> [2,] 1 0 0
#> [3,] 0 1 0
#> [4,] 1 1 0
#> [5,] 0 0 1
#> [6,] 1 0 1
#> [7,] 0 1 1
#> [8,] 1 1 1
If we exclude $\mathbf{y}$s with identical values, then
Ymat(3, exclude_identical = TRUE)
#> y1 y2 y3
#> [1,] 1 0 0
#> [2,] 0 1 0
#> [3,] 1 1 0
#> [4,] 0 0 1
#> [5,] 1 0 1
#> [6,] 0 1 1
Here is $\mathbf{Y}$ with corresponding values of AUC:
cbind(Ymat(3), AUC = apply(Ymat(3), 1, AUC))
#> y1 y2 y3 AUC
#> [1,] 0 0 0 0.0
#> [2,] 1 0 0 0.0
#> [3,] 0 1 0 0.5
#> [4,] 1 1 0 0.0
#> [5,] 0 0 1 1.0
#> [6,] 1 0 1 0.5
#> [7,] 0 1 1 1.0
#> [8,] 1 1 1 1.0
The values of $c(i, y_i)$ that express AUC as a sum of casewise values are solutions of equation (1), which can be written as solutions of the following system of $2^{n_v}$ (or $2^{n_v} - 2$) linear simultaneous equations in $2 n_v$ unknowns:
$$\left[(\mathbf{1} - \mathbf{Y}),\ \mathbf{Y}\right]\mathbf{c} = \mathbf{a} \tag{2}$$
where $\mathbf{1}$ is a matrix of 1s conformable with $\mathbf{Y}$; $\mathbf{c} = \left[c(1, 0), \ldots, c(n_v, 0), c(1, 1), \ldots, c(n_v, 1)\right]'$; $\mathbf{a} = \mathrm{AUC}(\widehat{\mathbf{Y}}, \mathbf{Y})$ is the vector of AUC values computed row-wise from $\widehat{\mathbf{Y}}$ and $\mathbf{Y}$; $\left[(\mathbf{1} - \mathbf{Y}),\ \mathbf{Y}\right]$ and $\mathbf{c}$ are partitioned matrices; and $\widehat{\mathbf{Y}}$ is a matrix each of whose rows consists of the integers 1 to $n_v$.
We can test whether equation (2) has a solution for any given $n_v$ by trying to solve it as a least-squares problem, considering whether the residuals of the associated linear model are all 0, using the "design matrix" $\left[(\mathbf{1} - \mathbf{Y}),\ \mathbf{Y}\right]$ to predict the "outcome" $\mathbf{a}$:
resids <- function(n_v,
exclude_identical = FALSE,
tol = sqrt(.Machine$double.eps)) {
Y <- Ymat(n_v, exclude_identical = exclude_identical)
AUC <- apply(Y, 1, AUC)
X <- cbind(1 - Y, Y)
opts <- options(warn = -1)
on.exit(options(opts))
fit <- lsfit(X, AUC, intercept = FALSE)
ret <- max(abs(residuals(fit)))
if (ret < tol) {
ret <- 0
solution <- coef(fit)
names(solution) <- paste0("c(", c(1:n_v, 1:n_v), ",",
rep(0:1, each = n_v), ")")
attr(ret, "solution") <- zapsmall(solution)
}
ret
}
The case $n_v = 3$, excluding identical $\mathbf{y}$s, has a solution:
resids(3, exclude_identical = TRUE)
#> [1] 0
#> attr(,"solution")
#> c(1,0) c(2,0) c(3,0) c(1,1) c(2,1) c(3,1)
#> 1.0 0.0 -0.5 0.5 0.0 0.0
But, if identical $\mathbf{y}$s are included, the equation is not consistent:
resids(3, exclude_identical = FALSE)
#> [1] 0.125
For $n_v = 4$, there are no solutions in either case:
resids(4, exclude_identical = TRUE)
#> [1] 0.083333
resids(4, exclude_identical = FALSE)
#> [1] 0.25
Consequently, the widely employed AUC measure of fit for binary regression cannot in general be used as a casewise cross-validation criterion.