Last updated on 2025-12-25 19:50:52 CET.
| Package | ERROR | OK |
|---|---|---|
| autonewsmd | | 13 |
| BiasCorrector | | 13 |
| DQAgui | | 13 |
| DQAstats | | 13 |
| kdry | 1 | 12 |
| mlexperiments | 1 | 12 |
| mllrnrs | 1 | 12 |
| mlsurvlrnrs | | 13 |
| rBiasCorrection | | 13 |
| sjtable2df | | 13 |
Current CRAN status for autonewsmd: OK: 13
Current CRAN status for BiasCorrector: OK: 13
Current CRAN status for DQAgui: OK: 13
Current CRAN status for DQAstats: OK: 13
Current CRAN status for kdry: ERROR: 1, OK: 12
Version: 0.0.2
Check: examples
Result: ERROR
Running examples in ‘kdry-Ex.R’ failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: mlh_reshape
> ### Title: mlh_reshape
> ### Aliases: mlh_reshape
>
> ### ** Examples
>
> set.seed(123)
> class_0 <- rbeta(100, 2, 4)
> class_1 <- (1 - class_0) * 0.4
> class_2 <- (1 - class_0) * 0.6
> dataset <- cbind("0" = class_0, "1" = class_1, "2" = class_2)
> mlh_reshape(dataset)
Error in xtfrm.data.frame(list(`0` = 0.219788839894465, `1` = 0.312084464042214, :
cannot xtfrm data frames
Calls: mlh_reshape ... [.data.table -> which.max -> xtfrm -> xtfrm.data.frame
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
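The failure above can be reproduced outside the package. Per the backtrace, `which.max(.SD)` on a one-row `data.table` now dispatches through `xtfrm.data.frame()` on r-devel, which refuses data frames. The sketch below mirrors the first row from the log and shows two base-R workarounds; the `unlist()` and `max.col()` calls are illustrative suggestions, not kdry's actual fix:

```r
# Values mirror the first row from the CRAN log above.
row <- data.frame("0" = 0.2198, "1" = 0.3121, "2" = 0.4681,
                  check.names = FALSE)

# On r-devel this errors ("cannot xtfrm data frames"); wrapped in
# try() so the sketch runs on any R version.
res <- try(which.max(row), silent = TRUE)

# Workaround 1: flatten the one-row frame to an atomic vector first.
idx <- which.max(unlist(row))
names(idx)  # the winning class label, "2"

# Workaround 2: for a full matrix of per-row class probabilities,
# max.col() returns the row-wise argmax directly.
probs <- as.matrix(row)
max.col(probs)  # 3, i.e. the third column
```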
Version: 0.0.2
Check: tests
Result: ERROR
Running ‘testthat.R’ [4s/5s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> library(testthat)
> library(kdry)
>
> test_check("kdry")
Saving _problems/test-mlh-70.R
[ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]
══ Skipped tests (6) ═══════════════════════════════════════════════════════════
• On CRAN (6): 'test-lints.R:10:5', 'test-rep.R:3:1', 'test-rep.R:22:1',
'test-rep.R:42:1', 'test-rep.R:61:1', 'test-rep.R:75:1'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-mlh.R:70:5'): test mlh - mlh_outsample_row_indices ─────────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.219788839894465, `1` = 0.312084464042214, `2` = 0.468126696063321), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x563bdb8b9070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─kdry::mlh_reshape(dataset) at test-mlh.R:70:5
2. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
3. │ └─data.table:::`[.data.table`(...)
4. └─base::which.max(.SD)
5. ├─base::xtfrm(`<data.table>`)
6. └─base::xtfrm.data.frame(`<data.table>`)
[ FAIL 1 | WARN 0 | SKIP 6 | PASS 71 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
Current CRAN status for mlexperiments: ERROR: 1, OK: 12
Version: 0.0.8
Check: examples
Result: ERROR
Running examples in ‘mlexperiments-Ex.R’ failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: performance
> ### Title: performance
> ### Aliases: performance
>
> ### ** Examples
>
> dataset <- do.call(
+ cbind,
+ c(sapply(paste0("col", 1:6), function(x) {
+ rnorm(n = 500)
+ },
+ USE.NAMES = TRUE,
+ simplify = FALSE
+ ),
+ list(target = sample(0:1, 500, TRUE))
+ ))
>
> fold_list <- splitTools::create_folds(
+ y = dataset[, 7],
+ k = 3,
+ type = "stratified",
+ seed = 123
+ )
>
> glm_optimization <- mlexperiments::MLCrossValidation$new(
+ learner = LearnerGlm$new(),
+ fold_list = fold_list,
+ seed = 123
+ )
>
> glm_optimization$learner_args <- list(family = binomial(link = "logit"))
> glm_optimization$predict_args <- list(type = "response")
> glm_optimization$performance_metric_args <- list(
+ positive = "1",
+ negative = "0"
+ )
> glm_optimization$performance_metric <- list(
+ auc = metric("AUC"), sensitivity = metric("TPR"),
+ specificity = metric("TNR")
+ )
> glm_optimization$return_models <- TRUE
>
> # set data
> glm_optimization$set_data(
+ x = data.matrix(dataset[, -7]),
+ y = dataset[, 7]
+ )
>
> cv_results <- glm_optimization$execute()
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
>
> # predictions
> preds <- mlexperiments::predictions(
+ object = glm_optimization,
+ newdata = data.matrix(dataset[, -7]),
+ na.rm = FALSE,
+ ncores = 2L,
+ type = "response"
+ )
Error in `[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), :
attempt access index 3/3 in VECTOR_ELT
Calls: <Anonymous> -> [ -> [.data.table
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
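The `predictions()` error above comes from a different code path than the kdry failure: a row-wise `:=` assignment over `as.numeric(.SD)` inside `[.data.table` hits an internal bound check ("attempt access index 3/3 in VECTOR_ELT") on this r-devel flavor. A hedged sketch of the same row-wise mean/sd aggregation in base R, which avoids that grouped-assign path entirely (the assumption that `res` holds one prediction column per CV fold is mine, inferred from the log):

```r
# Assumed shape: one prediction column per CV fold, one row per case.
res <- data.frame(Fold1 = c(0.20, 0.81), Fold2 = c(0.24, 0.79),
                  Fold3 = c(0.22, 0.83))

# Base matrix operations compute the per-row summaries without
# data.table's by-row := machinery.
m <- as.matrix(res)
res$mean <- rowMeans(m, na.rm = TRUE)
res$sd   <- apply(m, 1, stats::sd, na.rm = TRUE)
res
```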
Version: 0.0.8
Check: tests
Result: ERROR
Running ‘testthat.R’ [182s/468s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mlexperiments)
>
> test_check("mlexperiments")
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold4
CV fold: Fold5
Testing for identical folds in 2 and 1.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
Saving _problems/test-glm_predictions-79.R
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold4
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold5
Parameter 'ncores' is ignored for learner 'LearnerLm'.
Saving _problems/test-glm_predictions-188.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 11 times in 2 thread(s)... 25.821 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 0.966 seconds
Noise could not be added to find unique parameter set. Stopping process and returning results so far.
Registering parallel backend using 2 cores.
Running initial scoring function 11 times in 2 thread(s)... 27.299 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.047 seconds
Noise could not be added to find unique parameter set. Stopping process and returning results so far.
Registering parallel backend using 2 cores.
Running initial scoring function 4 times in 2 thread(s)... 12.549 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.107 seconds
3) Running FUN 2 times in 2 thread(s)... 4.992 seconds
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 11 times in 2 thread(s)... 15.024 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.182 seconds
Noise could not be added to find unique parameter set. Stopping process and returning results so far.
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 11 times in 2 thread(s)... 15.514 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.76 seconds
Noise could not be added to find unique parameter set. Stopping process and returning results so far.
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 11 times in 2 thread(s)... 14.044 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.209 seconds
Noise could not be added to find unique parameter set. Stopping process and returning results so far.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerLm'.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 25.531 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.07 seconds
3) Running FUN 2 times in 2 thread(s)... 3.963 seconds
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 12.95 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.055 seconds
3) Running FUN 2 times in 2 thread(s)... 2.257 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 13.459 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.165 seconds
3) Running FUN 2 times in 2 thread(s)... 2.238 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 14.172 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 0.91 seconds
3) Running FUN 2 times in 2 thread(s)... 2.592 seconds
CV fold: Fold1
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
Classification: using 'mean misclassification error' as optimization metric.
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.354 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.301 seconds
3) Running FUN 2 times in 2 thread(s)... 0.677 seconds
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 5.334 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 0.997 seconds
3) Running FUN 2 times in 2 thread(s)... 0.352 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 4.992 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.189 seconds
3) Running FUN 2 times in 2 thread(s)... 0.472 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.47 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.051 seconds
3) Running FUN 2 times in 2 thread(s)... 0.559 seconds
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res), by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
▆
1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
2. ├─...[]
3. └─data.table:::`[.data.table`(...)
── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm), sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res), by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
▆
1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
2. ├─...[]
3. └─data.table:::`[.data.table`(...)
[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
Current CRAN status for mllrnrs: ERROR: 1, OK: 12
Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [57s/189s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.164 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 8.461 seconds
3) Running FUN 2 times in 2 thread(s)... 1.041 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.32 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 6.19 seconds
3) Running FUN 2 times in 2 thread(s)... 0.57 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.159 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 9.689 seconds
3) Running FUN 2 times in 2 thread(s)... 0.711 seconds
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Saving _problems/test-binary-287.R
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Saving _problems/test-multiclass-162.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Saving _problems/test-multiclass-294.R
CV fold: Fold1
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 4.135 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 0.853 seconds
3) Running FUN 2 times in 2 thread(s)... 0.436 seconds
CV fold: Fold2
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 5.017 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.223 seconds
3) Running FUN 2 times in 2 thread(s)... 0.673 seconds
CV fold: Fold3
Registering parallel backend using 2 cores.
Running initial scoring function 5 times in 2 thread(s)... 5.455 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 1.269 seconds
3) Running FUN 2 times in 2 thread(s)... 0.69 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 7.35 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 9.088 seconds
3) Running FUN 2 times in 2 thread(s)... 0.765 seconds
CV fold: Fold2
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.388 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 2.637 seconds
3) Running FUN 2 times in 2 thread(s)... 0.943 seconds
CV fold: Fold3
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Registering parallel backend using 2 cores.
Running initial scoring function 10 times in 2 thread(s)... 6.687 seconds
Starting Epoch 1
1) Fitting Gaussian Process...
2) Running local optimum search... 14.959 seconds
3) Running FUN 2 times in 2 thread(s)... 0.797 seconds
CV fold: Fold1
CV fold: Fold2
CV fold: Fold3
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:287:5'): test nested cv, grid, binary - ranger ────────
Error in `xtfrm.data.frame(structure(list(`0` = 0.379858310721837, `1` = 0.620141689278164), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55a12d6f8070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─ranger_optimizer$execute() at test-binary.R:287:5
2. │ └─mlexperiments:::.run_cv(self = self, private = private)
3. │ └─mlexperiments:::.fold_looper(self, private)
4. │ ├─base::do.call(private$cv_run_model, run_args)
5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. │ ├─base::do.call(.cv_run_nested_model, args)
7. │ └─mlexperiments (local) `<fn>`(...)
8. │ └─hparam_tuner$execute(k = self$k_tuning)
9. │ └─mlexperiments:::.run_tuning(self = self, private = private, optimizer = optimizer)
10. │ └─mlexperiments:::.run_optimizer(...)
11. │ └─optimizer$execute(x = private$x, y = private$y, method_helper = private$method_helper)
12. │ ├─base::do.call(...)
13. │ └─mlexperiments (local) `<fn>`(...)
14. │ └─base::lapply(...)
15. │ └─mlexperiments (local) FUN(X[[i]], ...)
16. │ ├─base::do.call(FUN, fun_parameters)
17. │ └─mlexperiments (local) `<fn>`(...)
18. │ ├─base::do.call(private$fun_optim_cv, kwargs)
19. │ └─mllrnrs (local) `<fn>`(...)
20. │ ├─base::do.call(ranger_predict, pred_args)
21. │ └─mllrnrs (local) `<fn>`(...)
22. │ └─kdry::mlh_reshape(preds)
23. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
24. │ └─data.table:::`[.data.table`(...)
25. └─base::which.max(.SD)
26. ├─base::xtfrm(`<dt[,2]>`)
27. └─base::xtfrm.data.frame(`<dt[,2]>`)
── Error ('test-multiclass.R:162:5'): test nested cv, grid, multiclass - lightgbm ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.20774260202068, `1` = 0.136781829323219, `2` = 0.655475568656101), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55a12d6f8070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─lightgbm_optimizer$execute() at test-multiclass.R:162:5
2. │ └─mlexperiments:::.run_cv(self = self, private = private)
3. │ └─mlexperiments:::.fold_looper(self, private)
4. │ ├─base::do.call(private$cv_run_model, run_args)
5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. │ ├─base::do.call(.cv_run_nested_model, args)
7. │ └─mlexperiments (local) `<fn>`(...)
8. │ └─mlexperiments:::.cv_fit_model(...)
9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
── Error ('test-multiclass.R:294:5'): test nested cv, grid, multi:softprob - xgboost, with weights ──
Error in `xtfrm.data.frame(structure(list(`0` = 0.250160574913025, `1` = 0.124035485088825, `2` = 0.62580394744873), row.names = c(NA, -1L), class = c("data.table", "data.frame"), .internal.selfref = <pointer: 0x55a12d6f8070>, .data.table.locked = TRUE))`: cannot xtfrm data frames
Backtrace:
▆
1. ├─xgboost_optimizer$execute() at test-multiclass.R:294:5
2. │ └─mlexperiments:::.run_cv(self = self, private = private)
3. │ └─mlexperiments:::.fold_looper(self, private)
4. │ ├─base::do.call(private$cv_run_model, run_args)
5. │ └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<named list>`, fold_test = `<named list>`)
6. │ ├─base::do.call(.cv_run_nested_model, args)
7. │ └─mlexperiments (local) `<fn>`(...)
8. │ └─mlexperiments:::.cv_fit_model(...)
9. │ ├─base::do.call(self$learner$predict, pred_args)
10. │ └─mlexperiments (local) `<fn>`(...)
11. │ ├─base::do.call(private$fun_predict, kwargs)
12. │ └─mllrnrs (local) `<fn>`(...)
13. │ └─kdry::mlh_reshape(preds)
14. │ ├─data.table::as.data.table(object)[, cn[which.max(.SD)], by = seq_len(nrow(object))]
15. │ └─data.table:::`[.data.table`(...)
16. └─base::which.max(.SD)
17. ├─base::xtfrm(`<dt[,3]>`)
18. └─base::xtfrm.data.frame(`<dt[,3]>`)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error:
! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
Current CRAN status for mlsurvlrnrs: OK: 13
Current CRAN status for rBiasCorrection: OK: 13
Current CRAN status for sjtable2df: OK: 13