Revise / refresh testing infrastructure #67
Looking through the contribution guidelines, testthat is already recommended but does not seem to be used (at least the test-file directory given there is wrong). We should probably either actually use testthat or change the information about testing in CONTRIBUTING.md.
+1 to using testthat.
I know of two places currently where tests are defined: in tests/ and in checkEffects.R. Are there any others? The checks in checkEffects.R are in principle all simple testthat::expect_equal() checks on the "target statistics" (ans$targets), in addition to checking whether a model including the effect runs at all, while parallel.R might need some more involved converting. Some of the tests involve slow procedures, e.g. the multiple estimation runs for testing diffusion rate effects in checkEffects.R. When using testthat, we could also decide to mark some of the more involved test procedures with skip_on_cran(). The description in CONTRIBUTING.md might also need some updating; e.g. we probably do not want a separate test file for each function, and the checkEffects logic for new effects could be described there as well.

I can provide an example for testing the avGroup effect within the testthat framework, if we want to start converting. (I have also already written another test and set up the testthat infrastructure for the new feature branch sienaMargins.)
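A minimal sketch of what such a converted test could look like, assuming the package is RSiena and using its bundled s50 example data; the chosen effect (transTrip), the algorithm settings, and the checks on ans$targets are illustrative placeholders rather than the avGroup test mentioned above:

```r
# Hypothetical tests/testthat/test-effects.R; names and values are placeholders.
library(RSiena)
library(testthat)

test_that("target statistics of a small example model are well defined", {
  skip_on_cran()  # estimation runs are too slow for routine CRAN checks

  # small s50 example data shipped with RSiena: three friendship waves + alcohol
  friendship <- sienaDependent(array(c(s501, s502, s503), dim = c(50, 50, 3)))
  alcohol    <- varCovar(s50a)
  mydata     <- sienaDataCreate(friendship, alcohol)

  myeff <- includeEffects(getEffects(mydata), transTrip)

  # short phases and a fixed seed keep the run quick and reproducible;
  # projname = NULL (supported in recent RSiena versions) avoids a Siena.txt report
  myalgo <- sienaAlgorithmCreate(projname = NULL, nsub = 1, n3 = 50, seed = 123)
  ans <- siena07(myalgo, data = mydata, effects = myeff, batch = TRUE)

  # the checkEffects.R logic would compare ans$targets against known values,
  # e.g. expect_equal(ans$targets, reference_targets, tolerance = 1e-6);
  # here we only assert that they exist and are finite
  expect_true(is.numeric(ans$targets))
  expect_true(all(is.finite(ans$targets)))
})
```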
I suggest using testthat to refresh the testing infrastructure. This can replace the original test infrastructure in the tests folder. Advantages: the whole suite can be run with devtools::test(), and the existing checks involving tmp3 and tmp4 (and the Siena.txt output) can be carried over.
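For reference, devtools::test() and R CMD check both pick up the standard testthat layout; a minimal sketch of the runner file, assuming the package name is RSiena (usethis::use_testthat() would generate this automatically):

```r
# tests/testthat.R -- standard entry point run by R CMD check and devtools::test()
library(testthat)
library(RSiena)  # package name assumed from context

test_check("RSiena")
```

Individual test files then live in tests/testthat/ as test-*.R and can coexist with the current scripts in tests/ until everything is converted.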