---
aliases:
- "/evaluation_and_benchmarking.html"
---
# Evaluation and Benchmarking {#sec-performance}
{{< include ../../common/_setup.qmd >}}
`r chapter = "Evaluation and Benchmarking"`
`r authors(chapter)`
A supervised machine learning model can only be deployed in practice if it has a good `r index("generalization performance", aside = TRUE)`, which means it generalizes well to new, unseen data.
Accurate estimation of the generalization performance is crucial for many aspects of machine learning application and research -- whether we want to fairly compare a novel algorithm with established ones or to find the best algorithm for a particular task.
The concept of `r index("performance estimation")` provides information on how well a model will generalize to new data and plays an important role in the context of model comparison (@sec-benchmarking), model selection, and hyperparameter tuning (@sec-optimization).
Assessing the generalization performance of a model begins with selecting a `r index("performance measure")` that is appropriate for our given task and evaluation goal.
As we have seen in @sec-eval, performance measures typically compute a numeric score indicating how well the model predictions match the ground truth (though we also saw some measures of technical model properties in @sec-basics-measures-tech).
Once we have decided on a performance measure, the next step is to adopt a strategy that defines how to use the available data to estimate the generalization performance.
Using the same data to train and test a model is a bad strategy as it would lead to an overly optimistic performance estimate.
For example, a model that is overfitted (fit too closely to the data) could make perfect predictions on training data simply by memorizing it and then only make random guesses for new data.
In @sec-basics-partition we introduced `r ref("partition()")`, which splits a dataset into `r index('training data')` -- data for training the model -- and `r index('test data')` -- data for testing the model and estimating the generalization performance.
This simple approach is known as the holdout strategy (@sec-holdout-scoring) and is where we will begin this chapter.
We will then consider more advanced strategies for assessing the generalization performance (@sec-resampling), look at robust methods for comparing models (@sec-benchmarking), and finally will discuss specialized performance measures for binary classification (@sec-roc).
For an in-depth overview of measures and performance estimation, we recommend @japkowicz2011evaluating.
::: {.callout-warning}
## Resampling Does Not Avoid Model Overfitting
A common **misunderstanding** is that holdout and other more advanced resampling strategies can prevent model overfitting.
In fact, these methods just make overfitting visible as we can separately evaluate train/test performance.
Resampling strategies also allow us to make (nearly) unbiased estimations of the generalization error.
:::
## Holdout and Scoring {#sec-holdout-scoring}
An important goal of ML is to learn a model that can then be used to make predictions about new data.
For this model to be as accurate as possible, we would ideally train it on as much data as is available.
However, data is limited and as we have discussed we cannot train and test a model on the same data.
In practice, one would usually create an `r index('intermediate model', aside = TRUE)`, which is trained on a subset of the available data and then tested on the remainder of the data.
The performance of this intermediate model, obtained by comparing the model predictions to the ground truth, is an estimate of the generalization performance of the final model, which is the model fitted on all data.
The `r index('holdout', aside = TRUE)` strategy is a simple method to create this split between training and testing datasets, whereby the original data is split into two datasets using a defined ratio.
Ideally, the training dataset should be as large as possible so that the intermediate model represents the final model as well as possible.
If the training data is too small, the intermediate model is unlikely to perform as well as the final model, resulting in a pessimistically biased performance estimate.
On the other hand, if the training data is too large, then we will not have a reliable estimate of the generalization performance due to the high variance resulting from a small test dataset.
As a rule of thumb, it is common to use 2/3 of the data for training and 1/3 for testing as this provides a reasonable trade-off between bias and variance of the generalization performance estimate [@kohavi1995;@dobbin2011].
In @sec-basics, we used `r ref("partition()")` to apply the holdout method to a `r ref("Task")` object.
To recap, let us split `tsk("penguins")` with a 2/3 holdout (default split):
```{r performance-003}
tsk_penguins = tsk("penguins")
splits = partition(tsk_penguins)
lrn_rpart = lrn("classif.rpart")
lrn_rpart$train(tsk_penguins, splits$train)
prediction = lrn_rpart$predict(tsk_penguins, splits$test)
```
We can now estimate the generalization performance of a final model by evaluating the quality of the predictions from our intermediate model.
As we have seen in @sec-eval, this is simply a case of choosing one or more measures and passing them to the `$score()` function.
So to estimate the accuracy of our final model we would pass the accuracy measure to our intermediate model:
```{r}
prediction$score(msr("classif.acc"))
```
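Multiple measures can also be scored in one call by passing a list of measures, e.g., constructed with the `msrs()` sugar function (a short sketch reusing the same prediction object):
```{r}
# score accuracy and classification error at once
prediction$score(msrs(c("classif.acc", "classif.ce")))
```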
::: {.callout-tip}
## Permuting Observations for Performance Estimation
When splitting data, it is essential to permute the observations beforehand to remove any information that is encoded in the ordering of the data.
The order of data is often informative in real-world datasets, for example hospital data will likely be ordered by time of patient admission.
In `tsk("penguins")`, the data is ordered such that the first 152 rows all have the label 'Adelie', the next 68 have the label 'Chinstrap', and the final 124 have the label 'Gentoo'; so if we did not permute the data we could end up with a model that is only trained on one or two species.
`partition()` and all resampling strategies discussed below automatically randomly split the data to prevent any biases (so do not forget to set a seed for reproducibility).
Data *within* each set may still be ordered because of implementation details, but this is not a problem as long as the data is shuffled between sets.
:::
Many performance measures are based on 'decomposable' losses, which means they compute the differences between the predicted values and ground truth values first on an observation level and then aggregate the individual loss values over the test set into a single numeric score.
For example, the classification accuracy compares whether the predicted values from the `response` column have the same value as the ground truth values from the `truth` column of the `r ref("Prediction")` object.
Hence, for each observation, the decomposable loss takes either value `1` (if `response` and `truth` have the same value) or `0` otherwise.
The `$score()` method summarizes these individual loss values into an average value -- the percentage of observations where our prediction was correct.
Other performance measures that are not decomposable instead act on a set of observations; we will return to this in detail when we look at the AUC measure in @sec-roc.
@fig-score illustrates the input-output behavior of the `$score()` method; we will build on this when we turn to more complex evaluation strategies.
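To make the aggregation step concrete, the following minimal sketch (reusing the `prediction` object from above) computes the per-observation 0/1 values by hand and averages them; the result should match `prediction$score(msr("classif.acc"))`.
```{r}
# per-observation 0/1 values: 1 if response equals truth, 0 otherwise
correct = as.integer(prediction$response == prediction$truth)
head(correct)
# aggregating by averaging reproduces the classification accuracy
mean(correct)
```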
```{r performance-017, out.width = "80%"}
#| echo: false
#| label: fig-score
#| fig-cap: "Illustration of the `$score()` method which aggregates predictions of multiple observations contained in a prediction object into a single numeric score"
#| fig-alt: A funnel-shaped diagram where the far left box shows the output from a classification prediction object with row_ids, truth, and response columns. Next to this is a box that just says '$score()', which then passes to the right in a funnel shape to a box that says 'classif.acc 0.920354'.
include_multi_graphics("mlr3book_figures-3")
```
## Resampling {#sec-resampling}
`r index("Resampling")` strategies repeatedly split all available data into multiple training and test sets, with one repetition corresponding to what is called a 'resampling iteration' in `r mlr3`.
An intermediate model is then trained on each training set and the test set is used to measure the performance in each resampling iteration.
The generalization performance is finally estimated by aggregating the performance scores over multiple resampling iterations (@fig-ml-abstraction).
By repeating the data splitting process, data points are repeatedly used for both training and testing, allowing more efficient use of all available data for performance estimation.
Furthermore, a high number of resampling iterations can reduce the variance in our scores and thus result in a more reliable performance estimate.
This means that the performance estimate is less likely to be affected by an 'unlucky' split (e.g., a split that does not reflect the original data distribution).
```{r performance-002, echo=FALSE}
#| label: fig-ml-abstraction
#| fig-cap: "A general abstraction of the performance estimation process. The available data is (repeatedly) split into training data and test data (data splitting / resampling process). The learner is trained on each training dataset and produces intermediate models (learning process). Each intermediate model makes predictions based on the features in the test data. The performance measure compares these predictions with the ground truth from the test data and computes a performance value for each test dataset. All performance values are aggregated into a scalar value to estimate the generalization performance (evaluation process)."
#| fig-alt: "A flowchart-like diagram with 3 overlapping boxes. Left box has the caption 'Data splitting / resampling process', upper right box has caption 'Learning process', and lower right box has caption 'Evaluation process'. The process starts in the left box with 'Data' and an arrow to 'Resampling Strategy', which separates into two elements stacked vertically: 'Train Set(s)' above and 'Test Set(s)' below. The 'Train set(s)' element leads to a 'Learner' box, which is inside the larger 'Learning Process' box. A box that says 'Hyperparameters' also sits within the 'Learning Process' and is connected with an arrow also pointing to 'Learner'. An arrow points from the 'Learner' to a stack of 'Intermediate Model(s)'. One thick arrow goes down into the yellow box to a stack of 'Prediction(s)'. An arrow goes from there to 'Performance measure'. The 'Test set(s)' from earlier also have an arrow to 'Performance measure'. From there, a thick arrow goes to 'Performance Value(s)', which has a final dashed arrow to 'Aggregated Performance'."
include_multi_graphics("mlr3book_figures-4")
```
A variety of resampling strategies exist, each with its advantages and disadvantages, which depend on the number of available samples, the task complexity, and the type of model.
A very common strategy is k-fold `r index("cross-validation", aside = TRUE)` (CV), which randomly partitions the data into $k$ non-overlapping subsets, called folds (@fig-cv-illustration).
The $k$ models are always trained on $k-1$ of the folds, with the remaining fold being used as test data; this process is repeated until each fold has acted exactly once as test set.
Finally, the $k$ performance estimates from each fold are aggregated, usually by averaging.
CV guarantees that each observation will be used exactly once in a test set, making efficient use of the available data for performance estimation.
Common values for $k$ are 5 and 10, meaning each training set will consist of 4/5 or 9/10 of the original data, respectively.
Several variations of CV exist, including repeated k-fold cross-validation\index{cross-validation!repeated k-fold} where the k-fold process is repeated multiple times, and `r index('leave-one-out cross-validation', "leave-one-out", parent = "cross-validation")` (LOO-CV) where the number of folds is equal to the number of observations, leading to the test set in each fold consisting of only one observation.
`r index("Subsampling", aside = TRUE)` and `r index("bootstrapping")` are two related resampling strategies.
Subsampling randomly selects a given ratio (4/5 and 9/10 are common) of the data for the training dataset where each observation in the dataset is drawn *without replacement* from the original dataset.
The model is trained on this data and then tested on the remaining data, and this process is repeated $k$ times.
This differs from k-fold CV as the subsets of test data may be overlapping.
Bootstrapping follows the same process as subsampling but data is drawn *with replacement* from the original dataset. Usually the number of bootstrap samples equals the size of the original dataset.
This means an observation could be selected multiple times (and thus duplicated) in the training data (but never more than once per test dataset).
On average, $1 - e^{-1} \approx 63.2\%$ of the data points will be contained in the training set during bootstrapping, referred to as "in-bag" samples (the other 36.8% are known as "out-of-bag" samples).
Note that terminology regarding resampling strategies is not consistent across the literature, for example, subsampling is sometimes referred to as "repeated holdout" \index{repeated holdout|see{subsampling}} or "Monte Carlo cross-validation"\index{Monte Carlo cross-validation|see{subsampling}}.
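The in-bag fraction can be verified with a quick simulation (an illustrative sketch in plain R, not part of the mlr3 workflow): draw bootstrap samples of size $n$ and record the share of original observations that appear at least once.
```{r}
set.seed(42)
n = 10000
# proportion of observations appearing at least once in a bootstrap sample
in_bag = replicate(50, mean(seq_len(n) %in% sample(n, replace = TRUE)))
mean(in_bag) # close to 1 - exp(-1), i.e., about 0.632
```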
The choice of the resampling strategy usually depends on the specific task at hand and the goals of the performance assessment, but some rules of thumb are available.
If the available data is fairly small ($N \leq 500$), repeated cross-validation with a large number of repetitions can be used to keep the variance of the performance estimates low (10 folds and 10 repetitions is a good place to start).
Traditionally, LOO-CV has also been recommended for these small sample size regimes, but this estimation scheme is quite expensive (except in special cases where computational shortcuts exist) and (counterintuitively) suffers from quite high variance. Furthermore, LOO-CV is also problematic in imbalanced binary classification tasks as concepts such as stratification (@sec-strat-group) cannot be applied.
For the $500 \leq N \leq 50000$ range, 5- to 10-fold CV is generally recommended.
In general, the larger the dataset, the fewer splits are required, yet sample-size issues can still occur, e.g., due to imbalanced data.
For settings where one is more interested in proper inference (such as through statistical performance tests or confidence intervals) than bare point estimators of performance, bootstrapping and subsampling are often considered, usually with a higher number of iterations.
Bootstrapping has become less common, as having repeated observations in training data can lead to problems in some machine learning setups, especially when combined with model selection methods and nested resampling (as duplicated observations can then end up simultaneously in training and test sets in nested schemes).
Also note that in all of these common and simple schemes, resampling performance estimates are not independent, as models are fitted on overlapping training data, making proper inference less than trivial, but a proper treatment of these issues is out of scope for us here.
For further details and critical discussion we refer to the literature, e.g., @molinaro2005prediction, @kim2009estimating, and @bischl2012resampling.
<!-- Source: https://docs.google.com/presentation/d/1BJXJ365C9TWelojV93IeQJAtEiD3uZMFSfkhzgYH-n8/edit?usp=sharing -->
```{r performance-007, echo=FALSE}
#| label: fig-cv-illustration
#| fig-cap: "Illustration of a three-fold cross-validation."
#| fig-alt: "Complex flow chart in roughly three rows. Top row (Iteration 1) shows Dtrain split into two light blue boxes representing training data and pointing to a 'Learner', which points to a 'Model'. A dark blue box representing test data points to the same 'Model' as well as 'Measure'. 'Model' points to 'Prediction' which also points to 'Measure', which then points to 'Performance', which has an arrow to 'Averaged Performance'. In rows two and three the same process is inferred except with different boxes in dark and light blue so that each box has been dark blue exactly once across all three iterations."
include_multi_graphics("mlr3book_figures-6")
```
In the rest of this section, we will go through querying and constructing resampling strategies in `mlr3`, instantiating train-test splits, and then performing resampling on learners.
### Constructing a Resampling Strategy {#sec-resampling-construct}
All implemented resampling strategies are stored in the `r ref("mlr_resamplings")` dictionary.
```{r performance-008}
as.data.table(mlr_resamplings)
```
The `params` column shows the parameters of each resampling strategy (e.g., the train-test splitting `ratio` or the number of `repeats`) and `iters` displays the number of performed resampling iterations by default.
`r ref("Resampling", aside = TRUE)` objects can be constructed by passing the strategy 'key' to the sugar function `r ref("rsmp()", aside = TRUE)`.
For example, to construct the holdout strategy with a 4/5 split (2/3 by default):
```{r performance-009}
rsmp("holdout", ratio = 0.8)
```
Parameters for objects inheriting from `Resampling` work in the same way as measures and learners and can be set, retrieved, and updated accordingly:
```{r performance-011}
# three-fold CV
cv3 = rsmp("cv", folds = 3)
# Subsampling with 3 repeats and 9/10 ratio
ss390 = rsmp("subsampling", repeats = 3, ratio = 0.9)
# 2-repeats 5-fold CV
rcv25 = rsmp("repeated_cv", repeats = 2, folds = 5)
```
When a `"Resampling"` object is constructed, it is simply a definition for how the data splitting process will be performed on the task when running the resampling strategy.
However, it is possible to manually instantiate a resampling strategy, i.e., generate all train-test splits, by calling the `$instantiate()`\index{\texttt{Resampling}!\texttt{\$instantiate()}}[`$instantiate()`]{.aside} method on a given task.
So carrying on our `tsk("penguins")` example we can instantiate the three-fold CV object and then view the row indices of the data selected for training and testing each fold using `$train_set()` and `$test_set()` respectively:
```{r performance-012}
cv3$instantiate(tsk_penguins)
# first 5 observations in first training set
cv3$train_set(1)[1:5]
# first 5 observations in third test set
cv3$test_set(3)[1:5]
```
When the aim is to fairly compare multiple learners, best practice dictates that all learners being compared use the same training data to build a model and that they use the same test data to evaluate the model performance.
Resampling strategies are instantiated automatically for you when using the `resample()` method, which we will discuss next.
Therefore, manually instantiating resampling strategies is rarely required but might be useful for debugging or digging deeper into a model's performance.
### Resampling Experiments {#sec-resampling-exec}
The `r ref("resample()", aside = TRUE)` function takes a given `Task`, `Learner`, and `r ref("Resampling")` object to run the given resampling strategy.
`resample()` repeatedly fits a model on training sets, makes predictions on the corresponding test sets and stores them in a `r ref("ResampleResult", aside = TRUE)` object, which contains all the information needed to estimate the generalization performance.
```{r performance-013}
rr = resample(tsk_penguins, lrn_rpart, cv3)
rr
```
Each row of the output corresponds to one of the three iterations/folds.
As with `Prediction` objects, we can calculate the score *for each iteration* with `$score()`:
```{r performance-014}
acc = rr$score(msr("classif.ce"))
acc[, .(iteration, classif.ce)]
```
::: {.callout-tip}
## Evaluating Train Sets
By default, `$score()` evaluates the performance in the *test* sets in each iteration, however, you could evaluate the *train* set performance, see @sec-valid-tuning.
:::
While `$score()` returns the performance of each resampling iteration, `r index('$aggregate()', parent = "Learner", aside = TRUE, code = TRUE)` returns the aggregated score across all resampling iterations.
```{r}
rr$aggregate(msr("classif.ce"))
```
By default, the majority of measures will aggregate scores using a `r index("macro average")`, which first calculates the measure in each resampling iteration separately, and then averages these scores across all iterations.
However, it is also possible to aggregate scores using a `r index("micro average")`, which pools predictions across resampling iterations into one `r ref("Prediction")` object and then computes the measure on this directly:
```{r performance-015}
rr$aggregate(msr("classif.ce", average = "micro"))
```
We can see a *small* difference between the two methods.
Classification error is a decomposable loss (@sec-holdout-scoring); in fact, if the test sets all had the same size, the micro and macro methods would be identical (see the box below).
For errors like AUC, which are defined across the set of observations, the difference between micro- and macro-averaging will be larger.
The default type of aggregation method can be found by querying the `$average` field of a `r ref("Measure")` object.
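For example, as a short sketch, the default aggregation of the classification error can be inspected directly:
```{r}
msr("classif.ce")$average
```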
::: {.callout-tip}
## Macro- and Micro-Averaging
As a simple example to explain macro- and micro-averaging, consider the difference between taking the mean of a vector (micro) compared to the mean of two group-wise means (macro):
```{r}
# macro
mean(c(mean(c(3, 5, 9)), mean(c(1, 5))))
# micro
mean(c(3, 5, 9, 1, 5))
```
In the example shown in the main text where we used `tsk("penguins")`, there is a difference in the classification error between the micro and macro methods because the dataset has 344 rows, which is not divisible by three (the number of folds), so the test sets are not of equal size.
Note that the terms "macro-averaging" and "micro-averaging" are not used consistently in the literature, and sometimes refer to different concepts, e.g., the way in which the performance is aggregated across classes in a multi-class classification task.
:::
The aggregated score returned by `$aggregate()` estimates the generalization performance of our selected learner on the given task using the resampling strategy defined in the `Resampling` object.
While we are usually interested in this aggregated score, it can be useful to look at the individual performance values of each resampling iteration (as returned by the `$score()` method) as well, e.g., to see if any of the iterations lead to very different performance results.
@fig-score-aggregate-resampling visualizes the relationship between `$score()` and `$aggregate()` for a small example based on the `"penguins"` task.
```{r performance-017}
#| echo: false
#| label: fig-score-aggregate-resampling
#| fig-cap: "An example of the difference between `$score()` and `$aggregate()`: The former aggregates predictions to a single score within each resampling iteration, and the latter aggregates scores across all resampling iterations."
#| fig-alt: "A funnel-shaped diagram. Left: Each resampling iteration contains multiple rows of predictions, with 3 iterations total. Middle: $score() reduces those to one performance score per resampling iteration, which leaves 3 scores. Right: $aggregate() reduces predictions across all resampling iterations to a single performance score."
include_multi_graphics("mlr3book_figures-5")
```
To visualize the resampling results, you can use the `r ref("mlr3viz::autoplot.ResampleResult()")` function to plot scores across folds as boxplots or histograms (@fig-resamp-viz).
Histograms can be useful to visually gauge the variance of the performance results across resampling iterations, whereas boxplots are often used when multiple learners are compared side-by-side (see @sec-benchmarking).
```{r}
#| eval: false
rr = resample(tsk_penguins, lrn_rpart, rsmp("cv", folds = 10))
autoplot(rr, measure = msr("classif.acc"), type = "boxplot")
autoplot(rr, measure = msr("classif.acc"), type = "histogram")
```
```{r performance-035}
#| layout-ncol: 2
#| label: fig-resamp-viz
#| fig-subcap:
#| - "Boxplot of accuracy scores."
#| - "Histogram of accuracy scores."
#| fig-cap: "Boxplot and Histogram of accuracy scores."
#| message: false
#| fig-alt: "Left: a boxplot ranging from 0.875 to 1.0 and the interquartile range between 0.925 and 0.7. Right: a histogram with five bars in a roughly normal distribution with mean 0.95, minimum 0.875 and maximum 1.0."
#| echo: false
rr = resample(tsk_penguins, lrn_rpart, rsmp("cv", folds = 10))
plt1 = autoplot(rr, measure = msr("classif.acc"), type = "boxplot")
plt1$layers[[1]]$aes_params$fill = "white"
print(plt1)
plt2 = autoplot(rr, measure = msr("classif.acc"), type = "histogram")
plt2$layers[[1]]$aes_params$fill = "white"
print(plt2)
```
### Confidence Intervals {#sec-resampling-ci}
Confidence intervals (CIs) provide a range of values that, with a given level of confidence, covers the true generalization error.
Instead of relying solely on a single point estimate, CIs offer a measure of uncertainty around this estimate, allowing us to understand the reliability of our performance estimate.
While constructing CIs for the generalization error is challenging due to the complex nature of the inference problem, some methods have been shown to work well in practice [@kuempelfischer2024ciforge].
When employing such methods, it is important to be aware that they can fail in some cases -- e.g., in the presence of outliers or unstable learning procedures -- and that the resulting CIs can be either too conservative or too liberal.
The `r ref_pkg("mlr3inferr")` package extends the `mlr3` ecosystem with both inference methods and new resampling strategies.
The inference methods are implemented as `r ref("Measure")` objects that take another measure as input, for which the CI is then computed.
Below, we demonstrate how to use the inference method suggested by @bayle2020cross to compute a CI for the cross-validation result from above.
Unlike other mlr3 measures, the result is not a scalar value but a vector containing the point estimate as well as the lower and upper bounds of the CI for the specified confidence level.
```{r}
library(mlr3inferr)
# alpha = 0.05 is also the default
msr_ci = msr("ci.wald_cv", msr("classif.acc"), alpha = 0.05)
rr$aggregate(msr_ci)
```
We can also use `msr("ci")`, which will automatically select the appropriate method based on the `Resampling` object, if an inference method is available for it.
```{r}
rr$aggregate(msr("ci", msr("classif.acc")))
```
### ResampleResult Objects {#sec-resampling-inspect}
As well as being useful for estimating the generalization performance, the `r ref("ResampleResult")` object can also be used for model inspection.
We can use the `$predictions()` method to obtain a list of `r ref("Prediction")` objects corresponding to the predictions from each resampling iteration.
This can be used to analyze the predictions of individual intermediate models from each resampling iteration.
To understand the class better, we use it here to manually compute a macro averaged performance estimate.
```{r performance-018}
# list of prediction objects
rrp = rr$predictions()
# print first two
rrp[1:2]
# macro averaged performance
mean(sapply(rrp, function(.x) .x$score()))
```
The `$prediction()` method can be used to extract a single `Prediction` object that combines the predictions of each intermediate model across all resampling iterations.
The combined prediction object can, for example, be used to manually compute a micro-averaged performance estimate (see @sec-resampling-exec for how you can micro-average more conveniently).
```{r}
prediction = rr$prediction()
prediction
prediction$score()
```
By default, the intermediate models produced at each resampling iteration are discarded after the prediction step to reduce memory consumption of the `ResampleResult` object (only the predictions are required to calculate most performance measures).
However, it can sometimes be useful to inspect, compare, or extract information from these intermediate models.
We can configure the `r ref("resample()")` function to keep the fitted intermediate models by setting `store_models = TRUE`.
Each model trained in a specific resampling iteration can then be accessed via `$learners[[i]]$model`, where `i` refers to the `i`-th resampling iteration:
```{r performance-021}
rr = resample(tsk_penguins, lrn_rpart, cv3, store_models = TRUE)
# get the model from the first iteration
rr$learners[[1]]$model
```
In this example, we could then inspect the most important variables in each iteration to help us learn more about the respective fitted models:
```{r performance-022}
# print 2nd and 3rd iteration
lapply(rr$learners[2:3], function(x) x$model$variable.importance)
```
### Custom Resampling {#sec-resamp-custom}
{{< include ../../common/_optional.qmd >}}
Sometimes it is necessary to perform resampling with custom splits, e.g., to reproduce results reported in a study with pre-defined folds.
A custom holdout resampling strategy can be constructed using `rsmp("custom")`, where the row IDs of the observations used for training and testing must be defined manually when instantiated with a task.
In the example below, we construct a custom resampling strategy with two iterations by passing one list element of row IDs per iteration to the `train` and `test` arguments of `$instantiate()`:
```{r performance-023}
rsmp_custom = rsmp("custom")
# resampling strategy with two iterations
train_sets = c(1:5, 153:158, 277:280)
rsmp_custom$instantiate(tsk_penguins,
train = list(train_sets, train_sets + 5),
test = list(train_sets + 15, train_sets + 25)
)
resample(tsk_penguins, lrn_rpart, rsmp_custom)$prediction()
```
A custom cross-validation strategy can be more efficiently constructed with `rsmp("custom_cv")`.
In this case, we now have to specify either a custom `factor` variable or a `factor` column from the data to determine the folds.
In the example below, we use a smaller version of `tsk("penguins")` and instantiate a custom two-fold CV strategy using a `factor` variable called `folds` where the first and third rows are used as the test set in Fold 1, and the second and fourth rows are used as the test set in Fold 2:
```{r performance-025}
tsk_small = tsk("penguins")$filter(c(1, 100, 200, 300))
rsmp_customcv = rsmp("custom_cv")
folds = as.factor(c(1, 2, 1, 2))
rsmp_customcv$instantiate(tsk_small, f = folds)
resample(tsk_small, lrn_rpart, rsmp_customcv)$predictions()
```
### Stratification and Grouping {#sec-strat-group}
{{< include ../../common/_optional.qmd >}}
Using column roles (@sec-row-col-roles), it is possible to group or stratify observations according to a particular column in the data.
We will look at each of these in turn.
#### `r index('Grouped Resampling')` {.unlisted .unnumbered}
Keeping observations together when the data is split can be useful, and sometimes essential, during resampling -- spatial analysis (@sec-spatiotemporal) is a prominent example, as observations belong to natural groups (e.g., countries).
When observations belong to groups, we need to ensure all observations of the same group belong to *either* the training set *or* the test set to prevent potential leakage of information between training and testing.
For example, in a longitudinal study, measurements are taken from the same individual at multiple time points.
If we do not group these, we might overestimate the model's generalization capability to unseen individuals, because observations of the same individuals might simultaneously be in the train and test set.
In this context, the leave-one-out cross-validation strategy can be coarsened to the "leave-one-object-out" cross-validation strategy, where all observations associated with a certain group are left out (@fig-group).
```{r performance-026, echo=FALSE}
#| label: fig-group
#| fig-cap: "Illustration of the train-test splits of a leave-one-object-out cross-validation with 3 groups of observations (highlighted by different colors)."
#| fig-alt: "Three images, each shows a green box with text 'Train' and white space around it with text 'Test'. Left (Iteration 1): green box with blue and red dots inside it and yellow dots outside it. Middle (Iteration 2): green box with blue and yellow dots inside it and red dots outside it. Right (Iteration 3): green box with yellow and red dots inside it and blue dots outside it."
include_multi_graphics("mlr3book_figures-7")
```
The `"group"` column role allows us to specify the column in the data that defines the group structure of the observations.
In the following code, we construct a leave-one-out resampling strategy, assign the `"group"` role to the 'year' column of `tsk("penguins")`, instantiate the resampling strategy, and finally show how the years are nicely separated in the first fold.
```{r performance-027}
rsmp_loo = rsmp("loo")
tsk_grp = tsk("penguins")
tsk_grp$set_col_roles("year", "group")
rsmp_loo$instantiate(tsk_grp)
table(tsk_grp$data(rows = rsmp_loo$train_set(1), cols = "year"))
table(tsk_grp$data(rows = rsmp_loo$test_set(1), cols = "year"))
```
Other cross-validation techniques work in a similar way, where folds are determined at a group level (as opposed to an observation level).
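As a brief follow-up sketch (reusing `tsk_grp` from above, with the new `rsmp_gcv3` object introduced only for illustration), the same grouping also applies to k-fold CV: with three folds and three distinct years, each test set should contain exactly one complete year.
```{r}
rsmp_gcv3 = rsmp("cv", folds = 3)
rsmp_gcv3$instantiate(tsk_grp)
# each test set consists of all observations from a single year
table(tsk_grp$data(rows = rsmp_gcv3$test_set(1), cols = "year"))
```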
#### `r index('Stratified Sampling')` {.unlisted .unnumbered}
Stratified sampling ensures that one or more discrete features within the training and test sets will have a similar distribution as in the original task containing all observations.
This is especially useful when a discrete feature is highly imbalanced and we want to make sure that the distribution of that feature is similar in each resampling iteration (@fig-stratification).
We can also stratify on the target feature to ensure that each intermediate model is fit on training data where the class distribution of the target is representative of the actual task.
This ensures that target classes are not strongly under-represented by random chance in individual resampling iterations, which would lead to degenerate estimates of the generalization performance.
```{r performance-028, echo=FALSE}
#| label: fig-stratification
#| fig-cap: "Illustration of a three-fold cross-validation with stratification for an imbalanced binary classification task with a majority class that is about twice as large as the minority class. In each resampling iteration, the class distribution from the available data is preserved (which is not necessarily the case for cross-validation without stratification)."
#| fig-alt: "The figure shows rectangles in yellow and green to represent the majority and minority class respectively. On the left side are rectangles corresponding to the task before it is split; the majority class (yellow) on the left is clearly larger than the minority class (green) on the right. This is labeled 'Imabalanced Class Distribution'. In the next three boxes, labeled 'Iteration 1-3' respectively, the size difference between the majority and minority classes is preserved, i.e., the difference in size between majority and minority classes are equal."
include_multi_graphics("mlr3book_figures-8")
```
Unlike grouping, it is possible to stratify by multiple discrete features using the `"stratum"` column role (@sec-row-col-roles).
In this case, strata would be formed out of each combination of the stratified features, e.g., for two stratified features A and B with levels Aa, Ab; Ba, Bb respectively then the created stratum would have the levels AaBa, AaBb, AbBa, AbBb.
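How such combined strata arise can be illustrated with a short base-R sketch (purely illustrative, not mlr3 code):
```{r}
# every combination of the two factor levels forms one stratum
A = factor(c("Aa", "Aa", "Ab", "Ab"))
B = factor(c("Ba", "Bb", "Ba", "Bb"))
levels(interaction(A, B, sep = ""))
```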
`tsk("penguins")` displays imbalance in the `species` column, as can be seen in the output below:
```{r performance-029}
prop.table(table(tsk_penguins$data(cols = "species")))
```
Without specifying a `"stratum"` column role, the `species` column may have quite different class distributions across the CV folds, as can be seen in the example below.
```{r performance-030}
rsmp_cv10 = rsmp("cv", folds = 10)
rsmp_cv10$instantiate(tsk_penguins)
fold1 = prop.table(table(tsk_penguins$data(rows = rsmp_cv10$test_set(1),
cols = "species")))
fold2 = prop.table(table(tsk_penguins$data(rows = rsmp_cv10$test_set(2),
cols = "species")))
rbind("Fold 1" = fold1, "Fold 2" = fold2)
```
We can see that across folds, Chinstrap is represented quite differently (`r round(prop.table(table(tsk_penguins$data(rows = rsmp_cv10$test_set(1), cols = "species")))[2],2)` vs. `r round(prop.table(table(tsk_penguins$data(rows = rsmp_cv10$test_set(2), cols = "species")))[2],2)`).
When imbalance is severe, minority classes might be entirely absent from some training sets.
Consequently, the intermediate models within these resampling iterations will never predict the missing class, resulting in a misleading performance estimate for any resampling strategy without stratification.
The code below uses `species` as `"stratum"` column role to illustrate that the distribution of `species` in each test set will closely match the original distribution:
```{r performance-031}
tsk_str = tsk("penguins")
# set species to have both the 'target' and 'stratum' column role
tsk_str$set_col_roles("species", c("target", "stratum"))
rsmp_cv10$instantiate(tsk_str)
fold1 = prop.table(table(tsk_str$data(rows = rsmp_cv10$test_set(1),
cols = "species")))
fold2 = prop.table(table(tsk_str$data(rows = rsmp_cv10$test_set(2),
cols = "species")))
rbind("Fold 1" = fold1, "Fold 2" = fold2)
```
You can view the observations that fall into each stratum using the `$strata` field of a `Task` object, this can be particularly useful when we are interested in multiple strata:
```{r performance-034}
tsk_str$set_col_roles("year", "stratum")
tsk_str$strata
# N above matches with numbers in table below
table(tsk_penguins$data(cols = c("species", "year")))
```
## Benchmarking {#sec-benchmarking}
Benchmarking in supervised machine learning refers to the comparison of different learners on one or more tasks.
When comparing *multiple learners on a single task* or on a domain consisting of multiple similar tasks, the main aim is often to rank the learners according to a pre-defined performance measure and to identify the best-performing learner for the considered task or domain.
When comparing *multiple learners on multiple tasks*, the main aim is often more of a scientific nature, e.g., to gain insights into how different learners perform in different data situations or whether there are certain data properties that heavily affect the performance of certain learners (or certain hyperparameters of learners).
It is common (and good) practice for algorithm designers to analyze the generalization performance or runtime of a newly proposed learning algorithm in comparison to existing learners in a `r index("benchmark experiment")`.
Since benchmarks usually consist of many evaluations that can be run independently of each other, `mlr3` offers the possibility of parallelizing them automatically, which we demonstrate in @sec-parallel-resample.
In this section, we will focus on the basic setup of benchmark experiments that will be applicable in the majority of use cases, in @sec-large-benchmarking we will look at more complex, large-scale, benchmark experiments.
### benchmark() {#sec-bm-design}
`r index('Benchmark experiments')` in `mlr3` are conducted with `r ref("benchmark()", index = TRUE)`, which simply runs `r ref("resample()", index = TRUE)` on each task and learner separately, then collects the results.
The provided resampling strategy is automatically instantiated on each task to ensure that all learners are compared against the same training and test data.
To use the `benchmark()` function we first call `r ref("benchmark_grid()")`, which constructs an exhaustive *design* to describe all combinations of the learners, tasks and resamplings to be used in a benchmark experiment, and instantiates the resampling strategies.
As an example, below we set up a design to see whether a random forest, a decision tree, or a featureless baseline (@sec-basics-featureless) performs best across two classification tasks.
```{r performance-037}
tasks = tsks(c("german_credit", "sonar"))
learners = lrns(c("classif.rpart", "classif.ranger",
"classif.featureless"), predict_type = "prob")
rsmp_cv5 = rsmp("cv", folds = 5)
design = benchmark_grid(tasks, learners, rsmp_cv5)
head(design)
```
The resulting design is essentially just a `data.table`, which can be modified if you want to remove particular combinations or could even be created from scratch without the `benchmark_grid()` function.
Note that this `data.table` has list columns that contain R6 objects of tasks, learners, and resampling instances.
::: {.callout-warning}
## Reproducibility When Using `benchmark_grid()`
By default, `r ref("benchmark_grid()")` instantiates the resamplings on the tasks, which means that concrete train-test splits are generated.
Since this process is stochastic, it is necessary to set a seed **before** calling `benchmark_grid()` to ensure reproducibility of the data splits.
:::
The constructed benchmark design can then be passed to `benchmark()` to run the experiment and the result is a `r ref("BenchmarkResult")` object:
```{r performance-039}
bmr = benchmark(design)
bmr
```
As `benchmark()` is just an extension of `resample()`, we can once again use `$score()` or `$aggregate()` depending on our use case, though note that in this case `$score()` will return results for each fold of each learner/task/resampling combination.
```{r performance-040}
bmr$score()[c(1, 7, 13), .(iteration, task_id, learner_id, classif.ce)]
bmr$aggregate()[, .(task_id, learner_id, classif.ce)]
```
This concludes a basic benchmark experiment from which you can draw tentative conclusions about model performance; in this case, we would possibly conclude that the random forest is the best of the three models on each task.
We draw conclusions cautiously here as we have not run any statistical tests or included standard errors of measures, so we cannot definitively say if one model outperforms the others.
As the results of `$score()` and `$aggregate()` are returned in a `data.table`, you can post-process and analyze the results in any way you want.
A common *mistake* is to average the learner performance across all tasks when the tasks vary significantly.
This is a mistake because averaging the performance misses important insights into how learners compare on 'easier' and more 'difficult' predictive problems.
A more robust alternative to compare the overall algorithm performance across multiple tasks is to compute the ranks of each learner on each task separately and then calculate the average ranks.
This can provide a better comparison as task-specific 'quirks' are taken into account by comparing learners within tasks before comparing them across tasks.
However, using ranks will lose information about the numerical differences between the calculated performance scores.
Analysis of benchmark experiments, including statistical tests, is covered in more detail in @sec-benchmark-analysis.
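A minimal sketch of such a rank-based comparison, reusing the `bmr` object from above (the `data.table` code and column names below are just one way to do this):
```{r}
aggr = bmr$aggregate()[, .(task_id, learner_id, classif.ce)]
# rank learners within each task (rank 1 = lowest classification error)
aggr[, rank := rank(classif.ce), by = task_id]
# average the ranks across tasks
aggr[, .(mean_rank = mean(rank)), by = learner_id]
```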
### BenchmarkResult Objects {#sec-bm-resamp}
A `r ref("BenchmarkResult")` object is a collection of multiple `r ref("ResampleResult", index = TRUE)` objects.
```{r performance-043}
bmrdt = as.data.table(bmr)
bmrdt[1:2, .(task, learner, resampling, iteration)]
```
The contents of a `BenchmarkResult` and `ResampleResult` (@sec-resampling-inspect) are almost identical and the stored `ResampleResult`s can be extracted via the `$resample_result(i)` method, where `i` is the index of the performed resample experiment.
This allows us to investigate the extracted `ResampleResult` and individual resampling iterations as shown in @sec-resampling, as well as the predictions from each fold with `$resample_result(i)$predictions()`.
```{r performance-044}
rr1 = bmr$resample_result(1)
rr1
rr2 = bmr$resample_result(2)
```
In addition, `r ref('as_benchmark_result()')` can be used to convert objects from `ResampleResult` to `BenchmarkResult`.
The `c()`-method can be used to combine multiple `BenchmarkResult` objects, which can be useful when conducting experiments across multiple machines:
```{r performance-045}
bmr1 = as_benchmark_result(rr1)
bmr2 = as_benchmark_result(rr2)
c(bmr1, bmr2)
```
Boxplots are most commonly used to visualize benchmark experiments as they can intuitively summarize results across tasks and learners simultaneously.
```{r performance-046}
#| output: false
#| cache: false
autoplot(bmr, measure = msr("classif.acc"))
```
```{r, out.width = "70%"}
#| fig-height: 5
#| fig-width: 6
#| label: fig-benchmark-box
#| fig-cap: 'Boxplots of accuracy scores for each learner across resampling iterations and the two tasks. The random forest (`lrn("classif.ranger")`) consistently outperforms the other learners.'
#| fig-alt: Six boxplots, one corresponding to each task/learner combination. In all cases the random forest performs best and the featureless baseline the worst.
#| echo: false
#| warning: false
#| message: false
plt = ggplot2::last_plot()
plt = plt + ggplot2::scale_fill_manual(values = c("grey30", "grey50", "grey70"))
print(plt)
```
It is also possible to plot confidence intervals by setting the type of plot to `"ci"`.
Ignoring the multiple testing problem, @fig-benchmark-ci shows that the difference between the random forest and both other learners is statistically significant for the sonar task, whereas no final conclusion can be drawn for the german credit problem.
```{r}
#| fig-height: 5
#| fig-width: 6
#| label: fig-benchmark-ci
#| fig-cap: 'Confidence intervals for accuracy scores for each learner across resampling iterations and the two tasks. The random forest (`lrn("classif.ranger")`) consistently outperforms the other learners.'
#| fig-alt: Six confidence intervals, one corresponding to each task/learner combination. In all cases the random forest performs best and the featureless baseline the worst.
#| echo: false
#| warning: false
#| message: false
autoplot(bmr, "ci", measure = msr("ci", msr("classif.acc")))
```
## Evaluation of Binary Classifiers {#sec-roc}
In @sec-basics-classif-learner we touched on the concept of a confusion matrix and how it can be used to break down classification errors in more detail.
In this section, we will look at specialized performance measures for binary classification\index{classification!binary} in more detail.
We will first return to the confusion matrix and discuss measures that can be derived from it and then will look at `r index('ROC', lower = FALSE)` analysis which builds on these measures.
See Chapters 7 and 8 of @provost2013 for a more detailed introduction to ROC measures.
### Confusion Matrix
To recap, a `r index('confusion matrix')` summarizes the following quantities in a two-dimensional contingency table (see also @fig-confusion):
* `r index('True positives')` (TPs): Positive instances that are correctly classified as positive.
* `r index('True negatives')` (TNs): Negative instances that are correctly classified as negative.
* `r index('False positives')` (FPs): Negative instances that are incorrectly classified as positive.
* `r index('False negatives')` (FNs): Positive instances that are incorrectly classified as negative.
Different applications may have a particular interest in one (or multiple) of the aforementioned quantities.
For example, the `tsk("spam")` classification task is concerned with classifying if mail is spam (positive class) or not (negative class).
In this case, we are likely to accept FNs (some spam classified as genuine mail) as long as we have a low number of FPs (genuine and possibly important mail classified as spam).
In another example, say we are predicting if a travel bag contains a weapon (positive class) or not (negative class) at an airport.
This classifier must have a very high number of TPs (as FNs are not acceptable at all), even if this comes at the expense of more FPs (false alarms).
As we saw in @sec-basics-classif-learner, it is possible for a classifier to have a good classification accuracy but to overlook the nuances provided by a full confusion matrix, as in the following `tsk("german_credit")` example:
```{r performance-050}
tsk_german = tsk("german_credit")
lrn_ranger = lrn("classif.ranger", predict_type = "prob")
splits = partition(tsk_german, ratio = 0.8)
lrn_ranger$train(tsk_german, splits$train)
prediction = lrn_ranger$predict(tsk_german, splits$test)
prediction$score(msr("classif.acc"))
prediction$confusion
```
The classification accuracy only takes into account the TPs and TNs, whereas the confusion matrix provides a more holistic picture of the classifier's performance.
On their own, the absolute numbers in a confusion matrix can be less useful when there is class imbalance.
Instead, several normalized measures can be derived (@fig-confusion):
* **True Positive Rate\index{true positive rate}\index{sensitivity|see{measures, true positive rate}}\index{recall|see{measures, true positive rate}} (TPR)**, **Sensitivity** or **Recall**: How many of the true positives did we predict as positive?
* **True Negative Rate\index{true negative rate}\index{specificity|see{measures, true negative rate}} (TNR)** or **Specificity**: How many of the true negatives did we predict as negative?
* **False Positive Rate\index{false positive rate} (FPR)**, or $1 -$ **Specificity**: How many of the true negatives did we predict as positive?
* **Positive Predictive Value\index{positive predictive value}\index{precision|see{measures, positive predictive value}} (PPV)** or **Precision**: If we predict positive how likely is it a true positive?
* **Negative Predictive Value\index{negative predictive value} (NPV)**: If we predict negative how likely is it a true negative?
* **Accuracy (ACC)\index{accuracy}**: The proportion of correctly classified instances out of the total number of instances.
* **F1-score\index{F1}**: The harmonic mean of precision and recall, which balances the trade-off between precision and recall. It is calculated as $2 \times \frac{Precision \times Recall}{Precision + Recall}$.
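As a tiny numeric check of the F1 formula (with purely illustrative precision and recall values):
```{r}
# harmonic mean of illustrative precision and recall values
precision = 0.75
recall = 0.60
2 * (precision * recall) / (precision + recall)
```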
```{r performance-049}
#| echo: false
#| label: fig-confusion
#| fig-cap: "Binary confusion matrix of ground truth class vs. predicted class."
#| fig-alt: "Representation of a confusion matrix with entries 'TP' when both 'True class y' is '+'' and 'Predicted Class yhat' is '+', 'TN' when both are '-', 'FP' when True is '-' and Predicted is '+' and finally 'FN' when True is '+' but Predicted is '-'. In the margins is 'PPV = TP/(TP+FP)', 'NPV = TN/(FN+TN)', 'ACC=(TP+TN)/(TP+FP+FN+TN)', 'TNR=TN/(FP+TN)', 'TPR=TP/(TP+FN)'."
include_multi_graphics("confusion_matrix")
```
The `r ref_pkg("mlr3measures")` package allows you to compute several common confusion matrix-based measures using the `r ref("mlr3measures::confusion_matrix()")` function:
```{r performance-051}
mlr3measures::confusion_matrix(truth = prediction$truth,
response = prediction$response, positive = tsk_german$positive)
```
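Several of these quantities are also available as standalone mlr3 measures (e.g. `msr("classif.tpr")`); as a short sketch, we can score a few of them on the same prediction object:
```{r}
prediction$score(msrs(c("classif.tpr", "classif.fpr", "classif.ppv", "classif.npv")))
```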
We now have a better idea of the random forest predictions on `tsk("german_credit")`; in particular, the false positive rate is quite high.
It is generally difficult to achieve a high TPR and low FPR simultaneously because there is often a trade-off between the two rates.
When a binary classifier predicts probabilities instead of discrete classes (`predict_type = "prob"`), we could set a threshold to cut off the probabilities to change how we assign observations to the positive/negative class (see @sec-classif-prediction).
Increasing the threshold for identifying the positive cases leads to a higher number of negative predictions, fewer positive predictions, and therefore a lower (and better) FPR but a lower (and worse) TPR -- the reverse holds if we lower the threshold.
Instead of arbitrarily changing the threshold to 'game' these two numbers, a more robust way to trade off between TPR and FPR is to use ROC analysis, discussed next.
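Before turning to ROC analysis, a short sketch of the threshold effect itself: we clone the probability prediction from above (the new `pred_higher` object is just for illustration) and re-label the observations with a higher threshold for the positive class.
```{r}
pred_higher = prediction$clone()
# raise the threshold for predicting the positive class from 0.5 to 0.7
pred_higher$set_threshold(0.7)
pred_higher$confusion
```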
### ROC Analysis {#sec-roc-space}
`r index('ROC', lower = FALSE)` (Receiver Operating Characteristic) analysis is widely used to evaluate binary classifiers by visualizing the trade-off between the TPR and the FPR.
The ROC curve is a line graph with TPR on the y-axis and the FPR on the x-axis.
To understand the usefulness of this curve, first consider the simple case of a hard labeling classifier (`predict_type = "response"`) that classifies observations as either positive or negative.
This classifier would be represented as a single point in the ROC space (see @fig-roc, panel (a)).
The best classifier would lie on the top-left corner where the TPR is $1$ and the FPR is $0$.
Classifiers on the diagonal predict class labels at random (with different positive class proportions).
For example, if we randomly assign each positive instance to the positive class with probability 25% (ignoring the features), we obtain a TPR of 0.25.
If we assign each negative instance to the positive class with the same probability, we also obtain an FPR of 0.25.
In practice, we should never obtain a classifier clearly below the diagonal; a point below the diagonal might indicate that the positive and negative class labels have been switched by the classifier.
```{r performance-054, echo = FALSE, fig.height = 3.5, fig.width = 8}
#| label: fig-roc
#| fig-cap: "Panel (a): ROC space with best discrete classifier, two baseline classifiers -- one that always predicts the positive class and one that never predicts the positive class -- and three 'real' classifiers C1, C2, C3. We cannot say if C1 or C3 is better than the other as both are better in one metric. C2 is clearly worse than C1 and C3, which are better in at least one metric than C2 while not being worse in any other metric. Panel (b): ROC curves of the best classifier (AUC = 1), of a random guessing classifier (AUC = 0.5), and the classifiers C1, C3, and C2."
#| fig-alt: "Two plots labeled (a) and (b). Both have 'FPR' between 0-1 on x-axis and 'TPR' between 0-1 on y-axis, both also have a diagonal line y=x with text 'baseline (random classifiers)'. (a): There is a green dot in upper left corner at (0,1). There is a triangle labeled C1 at around (0.1,0.75), a square labeled C2 at around (0.24, 0.75), and a plus labeled C3 at around (0.25, 0.8). (b) is same as (a) except now there are three dashed lines such that each of the points from (a) lies on one of these lines. The lines roughly curve from (0,0) towards (0,1) and then to (1,1)"
library(ggplot2)
library(patchwork)
set.seed(123)
fun = function(x, lambda) 1 - exp(-lambda*x) #ecdf(rexp(1000000, rate = 5))
funinv = function(x, lambda) 1 + log(x)/lambda
x = c(seq(2e-5, 1, length = 1000))
lambda1 = -1*log(1 - 0.75)/0.125
lambda2 = -1*log(1 - 0.625)/0.25
d1 = data.frame(x = x, y = fun(x, lambda = lambda1))
d2 = data.frame(x = x, y = fun(x, lambda = lambda2))
d3 = data.frame(x = x, y = funinv(x, lambda = lambda1))
rd = data.frame(x = c(0, 1), y = c(0, 1))
classif = data.frame(x = c(0, 1, 1, 0, 0.125, 0.25, 0.25), y = c(1, 0, 1, 0, 0.75, 0.625, 0.875),
classifier = c("best", "worst", "baseline", "baseline", "C1", "C2", "C3"))
classif = droplevels(classif[-2, ])
p = ggplot(rd, aes(x = x, y = y)) +
coord_fixed(ratio = 1) +
ylab(expression(TPR)) + xlab(expression(FPR)) +
theme_bw()
p1 = p +
geom_line(color = 2, lty = 2, linewidth = 0.75) +
geom_text(aes(x = 0.5, y = 0.5, hjust = 0.5, vjust = -0.5, label = "random classifiers"), color = 2, size = 3, angle = 45) +
geom_point(data = classif, aes(x = x, y = y, color = classifier, shape = classifier), size = 3) +
geom_text(data = classif[classif$classifier == "baseline",],
aes(x = x, y = y, hjust = c(1.1, -0.1), vjust = c(0.5, 0.5)),
label = c("always predict positive class", "never predict positive class"),
color = 2, size = 3) +
geom_text(data = classif[grepl("^C", classif$classifier), ],
aes(x = x, y = y, hjust = c(0.5, 0.5, 0.5), vjust = c(-1, -1, -1)),
label = c("C1", "C2", "C3"),
color = c("C1" = "black", "C2" = "black", "C3" = "black"), #c("C1" = "gray70", "C2" = "gray50", "C3" = "gray30"),
size = 3) +
ggtitle("(a)") +
scale_color_manual("classifier",
values = c("best" = 3, "baseline" = 2,
"C1" = "black", "C2" = "black", "C3" = "black"
))
dall = rbind(
cbind(d1, AUC = round(mean(d1$y), 2), classifier = "C1"),
cbind(d2, AUC = round(mean(d2$y), 2), classifier = "C2"),
cbind(d3, AUC = round(mean(d3$y), 2), classifier = "C3"),
cbind(classif[c(3, 1, 2), 1:2], AUC = 1, classifier = "best"),
cbind(rd, AUC = 0.5, classifier = "random")
)
dall$AUC = factor(dall$classifier, levels = c("best", "random", "C1", "C2", "C3"))
lab = c("best \n(AUC = 1)", "random \n(AUC = 0.5)", "C1 (AUC = 0.9)", "C2 (AUC = 0.75)", "C3 (AUC = 0.9)")
p2 = p +
geom_text(aes(x = 0.5, y = 0.5, hjust = 0.5, vjust = -0.5, label = "baseline"), color = 2, size = 3, angle = 45) +
geom_line(data = dall, aes(x = x, y = y, lty = AUC, col = AUC), linewidth = 0.75) + ggtitle("(b)") +
geom_point(data = classif[grepl("^C", classif$classifier), ], aes(x = x, y = y, shape = classifier), size = 3) +
geom_text(data = classif[grepl("^C", classif$classifier), ],
aes(x = x, y = y, hjust = c(0.5, 0.5, 0.5), vjust = c(-1, -1, -1)),
label = c("C1", "C2", "C3"),
color = c("C1" = "black", "C2" = "black", "C3" = "black"),
size = 3) +
ylim(c(0, 1)) +
guides(shape = "none") +
scale_color_manual("ROC curve",
values = c(
"best" = 3,
"random" = 2,
"C1" = "gray70", "C2" = "gray70", "C3" = "gray70"),
labels = lab) +
scale_linetype_manual("ROC curve",
values = c(
"best" = 3,
"random" = 2,
"C1" = 3, "C2" = 4, "C3" = 5),
labels = lab) +
NULL
p1 + p2 & theme(plot.margin = grid::unit(c(0, 0, 0, 0), "mm"))
```
Now consider classifiers that predict probabilities instead of discrete classes.
Using different thresholds to cut off the predicted probabilities and assign observations to the positive or negative class leads to different TPRs and FPRs, and by plotting these values across all possible thresholds we can characterize the behavior of a binary classifier -- this is the ROC curve.
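To make this construction concrete, the following sketch computes a handful of ROC points by hand, re-labeling the predicted probabilities at three arbitrarily chosen thresholds (only the threshold values are invented for illustration):
```{r}
# compute (FPR, TPR) pairs by hand for a few illustrative thresholds
prob_pos = prediction$prob[, tsk_german$positive]
sapply(c(0.25, 0.5, 0.75), function(threshold) {
  # predict the positive class whenever its probability is at least the threshold
  response = factor(
    ifelse(prob_pos >= threshold, tsk_german$positive, tsk_german$negative),
    levels = levels(prediction$truth))
  c(threshold = threshold,
    fpr = mlr3measures::fpr(prediction$truth, response, tsk_german$positive),
    tpr = mlr3measures::tpr(prediction$truth, response, tsk_german$positive))
})
```
Each column of the result is one point on the ROC curve; plotting such points for every possible threshold traces out the full curve.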
For example, we can use the previous `r ref("Prediction")` object to compute all possible TPR and FPR combinations by thresholding the predicted probabilities across all possible thresholds, which is exactly what `mlr3viz::autoplot.PredictionClassif` will do when `type = "roc"` is selected:
```{r performance-055}
#| output: false
#| cache: false
autoplot(prediction, type = "roc")
```
```{r, out.width = "70%"}
#| label: fig-basics-roc-ranger
#| fig-cap: "ROC-curve based on the `german_credit` dataset and the `classif.ranger` random forest learner. Recall FPR = $1 -$ Specificity and TPR = Sensitivity."
#| fig-alt: ROC curve with "1 - Specificity" on x-axis (between 0-1) and "Sensitivity" on y-axis (between 0-1). There is a line from around (0,0) to (0.3,0.75) to (1, 1).
#| echo: false
#| warning: false
#| message: false
plt = ggplot2::last_plot()
plt = plt + ggplot2::scale_color_grey()
print(plt)
```
A natural performance measure that can be derived from the ROC curve is the `r index('area under the curve', "AUC", lower = FALSE, aside = TRUE)` (AUC), implemented in `msr("classif.auc")`.
The AUC can be interpreted as the probability that a randomly chosen positive instance has a higher predicted probability of belonging to the positive class than a randomly chosen negative instance.
Therefore, higher values (closer to $1$) indicate better performance.
Random classifiers (such as the featureless baseline) will always have an AUC of (approximately, when evaluated empirically) 0.5 (see @fig-roc, panel (b)).
```{r}
prediction$score(msr("classif.auc"))
```
```{r, echo = FALSE}
x = prediction$score(msr("classif.auc"))
```
Evaluating our random forest on `tsk("german_credit")` results in an AUC of around `r round(x, 2)`, which is acceptable but could be better.
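The probabilistic interpretation of the AUC can be checked directly from the prediction object; a minimal sketch that compares all positive-negative pairs of predicted probabilities (counting ties as one half), which should agree with the value reported above:
```{r}
# empirical AUC: proportion of positive-negative pairs that are ranked correctly
p_pos = prediction$prob[prediction$truth == tsk_german$positive, tsk_german$positive]
p_neg = prediction$prob[prediction$truth != tsk_german$positive, tsk_german$positive]
mean(outer(p_pos, p_neg, ">") + 0.5 * outer(p_pos, p_neg, "=="))
```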
::: {.callout-tip}
## Multiclass ROC and AUC
Extensions of ROC analysis for multiclass classifiers exist [see e.g., @hand2001simple] but we only cover the more common binary classification case in this book.
Generalizations of the AUC measure to multiclass classification are implemented in `mlr3`; see `msr("classif.mauc_au1p")`.
:::
We can also plot the `r index('precision-recall curve', aside = TRUE)` (PRC) which visualizes the PPV/precision\index{positive predictive value} vs. TPR/recall\index{true positive rate}.
The main difference between ROC curves and PR curves is that the number of true negatives is ignored in the latter.
This can be useful for imbalanced populations where the positive class is rare, as a classifier with a high TPR may still have a low PPV and therefore not be very informative.
See @davis2006relationship for a detailed discussion about the relationship between the PRC and ROC curves.
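To see why ignoring true negatives matters, consider a hypothetical population in which only 1% of instances are positive. A classifier with a TPR of 0.9 and an FPR of 0.1 looks strong in ROC space, yet its precision is only $\frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.08$, so more than 90% of its positive predictions are false alarms, a weakness that the PRC exposes but the ROC curve does not.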
```{r performance-056}
#| output: false
#| cache: false
autoplot(prediction, type = "prc")
```
```{r, out.width = "70%"}
#| fig-cap: 'Precision-Recall curve based on `tsk("german_credit")` and `lrn("classif.ranger")`.'
#| label: fig-basics-prc-ranger
#| fig-alt: 'Line curve with "Recall" on x-axis (between 0-1) and "Precision" on y-axis (between 0-1). There is a horizontal line through around y=0.74. There is also a line decreasing from (0,1) to (1,0.74).'
#| echo: false
#| warning: false
#| message: false
plt = ggplot2::last_plot()
plt = plt + ggplot2::scale_color_grey()
print(plt)
```
Another useful way to think about the performance of a classifier is to visualize how a performance metric changes as the prediction threshold is varied; for example, @fig-basics-fpracc-ranger shows the FPR and accuracy across all possible thresholds:
```{r performance-057}
#| eval: false
autoplot(prediction, type = "threshold", measure = msr("classif.fpr"))
autoplot(prediction, type = "threshold", measure = msr("classif.acc"))
```
```{r performance-057-1}
#| label: fig-basics-fpracc-ranger
#| fig-cap: 'Comparing threshold and FPR (left) with threshold and accuracy (right) for the random forest trained on `tsk("german_credit")`.'
#| fig-alt: 'Two line graphs, both with "Probability Threshold" on x-axis from 0-1. Left: "classif.fpr" on y-axis. Line slowly decreases from (0,1) to (1,0). Right: "classif.acc" on y-axis. Line travels from (0,0.7) to (0.25,0.7) to (0.4,0.75) to (1, 0.3).'
#| fig-subcap:
#| - FPR
#| - Accuracy
#| layout-ncol: 2
#| echo: false
plt1 = autoplot(prediction, type = "threshold", measure = msr("classif.fpr"))
plt1$layers[[1]]$aes_params$colour = "grey30"
print(plt1)
plt2 = autoplot(prediction, type = "threshold", measure = msr("classif.acc"))
plt2$layers[[1]]$aes_params$colour = "grey30"
print(plt2)
```
This visualization shows that changing the threshold from the default 0.5 to a higher value such as 0.7 would greatly reduce the FPR while reducing accuracy by only a few percentage points.
Depending on the problem at hand, this might be a perfectly desirable trade-off.
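We can verify this directly by re-labeling the stored probability predictions with `$set_threshold()` and scoring again; a minimal sketch (the value 0.7 is simply read off the plots above):
```{r}
# re-label the probability predictions with a stricter threshold and rescore
prediction$set_threshold(0.7)
prediction$score(msrs(c("classif.fpr", "classif.acc")))
# restore the default threshold of 0.5 so later examples are unaffected
prediction$set_threshold(0.5)
```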
These visualizations are also available for `r ref("ResampleResult")` objects.
In this case, the predictions of individual resampling iterations are merged before calculating a ROC or PR curve (micro averaged):
```{r performance-058}
#| eval: false
rr = resample(
task = tsk("german_credit"),
learner = lrn("classif.ranger", predict_type = "prob"),
resampling = rsmp("cv", folds = 5)
)
autoplot(rr, type = "roc")
autoplot(rr, type = "prc")
```
```{r performance-058-1}
#| label: fig-basics-rocpr-ranger
#| layout-ncol: 2
#| fig-subcap:
#| - ROC
#| - PR Curve
#| fig-cap: 'Comparing ROC (left) and PR curve (right) for a random forest trained on `tsk("german_credit")`.'
#| fig-alt: 'Two line graphs. Left is ROC curve with the "best" point close to (0.25, 0.75). Right is PR curve ending at 0.74.'
#| echo: false
#| message: false
rr = resample(
task = tsk("german_credit"),
learner = lrn("classif.ranger", predict_type = "prob"),
resampling = rsmp("cv", folds = 5)
)
plt1 = autoplot(rr, type = "roc")
plt1 = plt1 + ggplot2::scale_fill_grey() + ggplot2::scale_color_grey()
print(plt1)
plt2 = autoplot(rr, type = "prc")
plt2 = plt2 + ggplot2::scale_fill_grey() + ggplot2::scale_color_grey()
print(plt2)
```
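The merged prediction underlying these micro-averaged curves can also be retrieved directly with `$prediction()`, which pools the test-set predictions of all folds into a single object; a minimal sketch contrasting it with the usual macro-averaged aggregation:
```{r}
# micro-averaged AUC: pool all test-set predictions, then score once
rr$prediction()$score(msr("classif.auc"))
# macro-averaged AUC: score each fold separately, then average
rr$aggregate(msr("classif.auc"))
```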
Finally, we can visualize ROC/PR curves for a `r ref("BenchmarkResult")` to compare multiple learners on the same `r ref("Task")`:
```{r performance-059-evalF, eval = FALSE}
library(patchwork)
design = benchmark_grid(
tasks = tsk("german_credit"),
learners = lrns(c("classif.rpart", "classif.ranger"),
predict_type = "prob"),
resamplings = rsmp("cv", folds = 5)
)
bmr = benchmark(design)
autoplot(bmr, type = "roc") + autoplot(bmr, type = "prc") +
plot_layout(guides = "collect")
```
```{r performance-059-evalT, echo = FALSE, fig.width = 11}
#| label: fig-basics-rocpr-bmr
#| fig-cap: 'Comparing random forest (green) and decision tree (purple) using ROC and PR Curves.'
#| fig-alt: 'Two line graphs, each with two lines for decision tree and random forest. Left is ROC curve showing random forest has consistently better TPR/FPR trade-off. Right is PR Curve showing random forest has better Precision/Recall trade-off.'
library(patchwork)
design = benchmark_grid(
tasks = tsk("german_credit"),
learners = lrns(c("classif.rpart", "classif.ranger"),
predict_type = "prob"),
resamplings = rsmp("cv", folds = 5)
)
bmr = benchmark(design)
fig = magick::image_graph(width = 1500, height = 1000, res = 200)
autoplot(bmr, type = "roc") + autoplot(bmr, type = "prc") +
plot_layout(guides = "collect")
invisible(dev.off())
magick::image_trim(fig)
```
## Conclusion
In this chapter, we learned how to estimate the generalization performance of a model via resampling strategies, from holdout to cross-validation and bootstrap, and how to automate the comparison of multiple learners in benchmark experiments.
We also covered the basics of performance measures for binary classification, including the confusion matrix, ROC analysis, and precision-recall curves.
These topics are fundamental in supervised learning and will continue to be built upon throughout this book.
In particular, @sec-optimization utilizes evaluation in automated model tuning to improve performance, in @sec-large-benchmarking we look at large benchmarks and their statistical analysis, and in @sec-special we will take a look at specialized tasks that require different resampling strategies.
| Class | Constructor/Function | Fields/Methods |
| ---- | ----- | -------- |
| `r ref("PredictionClassif")` | `classif_lrn$predict()` | `r ref("mlr3measures::confusion_matrix()")`; `autoplot(some_prediction_classif, type = "roc")` |
| - | `r ref("partition()")` | - |
| `r ref("Resampling")` | `r ref("rsmp()")` | `$instantiate()` |
| `r ref("ResampleResult")` | `r ref("resample()")` | `$score()`; `$aggregate()`; `$predictions()`; `as_benchmark_result()`; `autoplot(some_resample_result, type = "roc")` |
| - | `r ref("benchmark_grid()")` | - |
| `r ref("BenchmarkResult")` | `r ref("benchmark()")` | `$aggregate()`; `$resample_result()`; `$score()`; `autoplot(some_benchmark_result, type = "roc")` |
: Important classes and functions covered in this chapter with underlying class (if applicable), class constructor or function, and important class fields and methods (if applicable). {#tbl-api-performance}
## Exercises
1. Apply a repeated cross-validation resampling strategy on `tsk("mtcars")` and evaluate the performance of `lrn("regr.rpart")`.
Use five repeats of three folds each.
Calculate the MSE for each iteration and visualize the result.
Finally, calculate the aggregated performance score.
1. Use `tsk("spam")` and five-fold CV to benchmark `lrn("classif.ranger")`, `lrn("classif.log_reg")`, and `lrn("classif.xgboost", nrounds = 100)` with respect to AUC.
Which learner appears to perform best? How confident are you in your conclusion?
Think about the stability of results and investigate this by re-running the experiment with different seeds.
What can be done to improve this?
1. A colleague reports a 93.1% classification accuracy using `lrn("classif.rpart")` on `tsk("penguins_simple")`.
You want to reproduce their results and ask them about their resampling strategy.
They said they used a custom three-fold CV with folds assigned as `factor(task$row_ids %% 3)`.
See if you can reproduce their results.