Running with --eval val still runs evaluation on the test split #127
Labels: bug (Something isn't working)
As described in neural-lam/neural_lam/train_model.py, lines 163 to 168 (commit 1c281a2), running with --eval val should evaluate the model on the validation split. This used to be the case, but when the data module was introduced this was apparently changed so that evaluation always happens on the test set (see neural-lam/neural_lam/weather_dataset.py, lines 655 to 663, commit 1c281a2).
This is an error in the documentation, but more importantly I think we should keep the possibility to run evaluation separately on the full validation set, as this is highly useful. Here is a possible fix that allows for this: joeloskarsson@deec455
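The shape of the fix can be sketched as follows: instead of hardcoding "test" when building the evaluation dataloader, the requested split from --eval is passed through to the data module. All names below (WeatherDataModule, eval_split, run_eval) are illustrative assumptions for this sketch, not the actual neural-lam API:

```python
class WeatherDataModule:
    """Minimal stand-in for a data module; not the real neural-lam class."""

    def __init__(self, eval_split="test"):
        # Store which split to evaluate on instead of hardcoding "test".
        self.eval_split = eval_split

    def eval_dataloader(self):
        # The real code would construct a DataLoader over self.eval_split;
        # here we just return a label so the wiring is visible.
        return f"dataloader({self.eval_split})"


def run_eval(eval_arg):
    # eval_arg corresponds to the value of --eval ("val" or "test"):
    # it is forwarded to the data module rather than ignored.
    dm = WeatherDataModule(eval_split=eval_arg)
    return dm.eval_dataloader()
```

With this wiring, `run_eval("val")` evaluates on the validation split while `run_eval("test")` keeps the current test-set behavior.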