This function uses a single training/test split to train a penalized regression model on the training samples, then uses the fitted model to calculate values of the linear risk score in the test samples. It is used by opt.nested.crossval, but can also be used on its own.

This function supports z-score scaling of the training data, with the resulting scaling and shifting coefficients then applied to the test data. It also supports repeated tuning of the penalty parameters and selection of the model with the greatest cross-validated likelihood.
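As a minimal base-R sketch of the scaling behavior described above (object names here are illustrative, not part of the package): features of the training set are converted to z-scores, and the same centering and scaling factors, computed from the training data only, are then applied to the test set.

```r
## Toy feature matrix: 10 samples (rows) x 4 features (columns).
set.seed(1)
x <- matrix(rnorm(40), nrow = 10)
train.idx <- 1:5
test.idx <- 6:10

## z-score the training rows, then reuse the training-derived
## center and scale attributes on the test rows.
x.train <- scale(x[train.idx, ])
x.test <- scale(x[test.idx, ],
                center = attr(x.train, "scaled:center"),
                scale = attr(x.train, "scaled:scale"))
```

Note that the test rows are not re-standardized on their own; reusing the training factors avoids information leaking from the test set into the model.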

opt.splitval(optFUN="opt1D",testset="equal",scaling=TRUE,...)

## Arguments

- optFUN: "opt1D" for Lasso or Ridge regression, "opt2D" for Elastic Net. See the help pages for these functions for additional arguments.
- testset: For the opt.splitval function only. Either "equal", for randomly assigned training and test sets of equal size, or an integer vector defining the positions of the test samples in the response, penalized, and unpenalized arguments, which are passed to the optL1, optL2, or cvl functions of the penalized R package.
- scaling: If TRUE, each feature (column) of the training samples (in the matrix or dataframe specified by the penalized argument) is scaled to z-scores, and these scaling and shifting factors are then applied to the test data. If FALSE, no scaling is done.
- ...: Additional required arguments, passed to the optL1 or optL2 function of the penalized R package. See those help pages; it may be desirable to test these arguments directly on optL1 or optL2 before using this more CPU-intensive and complex function.
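As an illustrative sketch (not taken from the package itself), an integer vector suitable for the testset argument can be built with base R; n.samples and testset.idx are hypothetical names:

```r
## Hold out a random third of the samples as the test set.
## The resulting integer vector of positions could be supplied as
## testset = testset.idx in place of testset = "equal".
set.seed(1)
n.samples <- 30                                  # hypothetical sample count
testset.idx <- sample(n.samples, size = n.samples %/% 3)
```

The positions index the rows of the penalized matrix and the elements of the response, so they must lie in 1:n.samples and contain no duplicates.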

## Details

This function performs split-sample model training and testing for a single split of the data, using the optL1 or optL2 functions of the penalized R package to tune the penalty parameters by cross-validation within the training samples. Scaling of the test samples is done independently, using scale factors determined from the training samples. Repeated starts of model training can be parallelized as documented in the opt1D and opt2D functions. This function is used for nested cross-validation by the opt.nested.crossval function.

## Value

Returns a vector of cross-validated continuous risk score predictions.

## References

Waldron L, Pintilie M, Tsao M-S, Shepherd FA, Huttenhower C*, Jurisica I*: Optimized application of penalized regression methods to diverse genomic data. Bioinformatics 2011, 27:3399-3406. (*equal contribution)

## Note

Depends on the R packages: penalized, parallel, rlecuyer

## See Also

opt1D, opt2D, opt.nested.crossval

## Examples

data(beer.exprs)
data(beer.survival)

## select just 250 genes to speed computation:
set.seed(1)
beer.exprs.sample <- beer.exprs[sample(1:nrow(beer.exprs), 250), ]

gene.quant <- apply(beer.exprs.sample, 1, quantile, probs = 0.75)
dat.filt <- beer.exprs.sample[gene.quant > log2(100),]
gene.iqr <- apply(dat.filt, 1, IQR)
dat.filt <- as.matrix(dat.filt[gene.iqr > 0.5,])
dat.filt <- t(dat.filt)

library(survival)
surv.obj <- Surv(beer.survival$os, beer.survival$status)

## Single split training/test evaluation.  Ideally nsim would be 50 and
## fold=10, but this requires 100x more resources.
set.seed(1)
preds50 <- opt.splitval(
  optFUN = "opt1D",
  scaling = TRUE,
  testset = "equal",
  setpen = "L1",
  nsim = 1,
  nprocessors = 1,
  response = surv.obj,
  penalized = dat.filt,
  fold = 5,
  positive = FALSE,
  standardize = FALSE,
  trace = FALSE
)

preds50.dichot <- preds50 > median(preds50)

surv.obj.50 <- surv.obj[match(names(preds50), rownames(beer.survival))]
coxfit50.continuous <- coxph(surv.obj.50 ~ preds50)
coxfit50.dichot <- coxph(surv.obj.50 ~ preds50.dichot)
summary(coxfit50.continuous)
#> Call:
#> coxph(formula = surv.obj.50 ~ preds50)
#>
#>   n= 43, number of events= 13
#>
#>           coef exp(coef) se(coef)     z Pr(>|z|)
#> preds50  4.142    62.927    3.026 1.369    0.171
#>
#>         exp(coef) exp(-coef) lower .95 upper .95
#> preds50     62.93    0.01589    0.1671     23695
#>
#> Concordance= 0.618  (se = 0.084 )
#> Likelihood ratio test= 1.84  on 1 df,   p=0.2
#> Wald test            = 1.87  on 1 df,   p=0.2
#> Score (logrank) test = 1.9  on 1 df,   p=0.2

summary(coxfit50.dichot)
#> Call:
#> coxph(formula = surv.obj.50 ~ preds50.dichot)
#>
#>   n= 43, number of events= 13
#>
#>                     coef exp(coef) se(coef)     z Pr(>|z|)
#> preds50.dichotTRUE 0.623     1.865    0.560 1.113    0.266
#>
#>                    exp(coef) exp(-coef) lower .95 upper .95
#> preds50.dichotTRUE     1.865     0.5363    0.6222     5.588
#>
#> Concordance= 0.561  (se = 0.071 )
#> Likelihood ratio test= 1.24  on 1 df,   p=0.3
#> Wald test            = 1.24  on 1 df,   p=0.3
#> Score (logrank) test = 1.28  on 1 df,   p=0.3
#>