
Revise / refresh testing infrastructure #67

Open
vankesteren opened this issue Apr 6, 2023 · 3 comments
Labels
help wanted (Extra attention is needed)

Comments

@vankesteren
Contributor

I suggest using testthat to refresh the testing infrastructure. This can replace the original test infrastructure in the tests folder.

Advantages:

  • easier to maintain (separate files for unit tests of different things) and extend
  • nicer error reports using devtools::test()
  • faster (you can run a subset of tests when developing)
  • the RSiena package does not need to export variables like tmp3 and tmp4
  • running tests will not put files in the root directory (Siena.txt)
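As a sketch of what a converted test could look like (the file name is hypothetical, and it assumes the standard s50 example data shipped with RSiena), a minimal testthat unit test might be:

```r
# tests/testthat/test-sienaDataCreate.R  (hypothetical file name)
library(testthat)
library(RSiena)

test_that("sienaDataCreate builds a data object from the s50 networks", {
  # s501, s502, s503 are the example friendship waves bundled with RSiena
  friendship <- sienaDependent(array(c(s501, s502, s503), dim = c(50, 50, 3)))
  mydata <- sienaDataCreate(friendship)
  expect_s3_class(mydata, "siena")
  expect_equal(mydata$observations, 3)
})
```

Such files would live in tests/testthat/ and run via devtools::test(), so nothing is written to the package root.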
@Kaladani
Collaborator

Kaladani commented Jun 9, 2024

Looking through the contribution guidelines, testthat is already recommended but does not seem to be used (at least the test-file directory given there is wrong). We should either actually use testthat or update the information about testing in CONTRIBUTING.md.

@jhollway added the "help wanted" label Jun 10, 2024
@jhollway
Collaborator

+1 to using {testthat}, also to bring {RSiena} in line with other stocnet packages. This is not a trivial change, however, so converting to testthat tests may be a slowish and/or collaborative process. I have added the "help wanted" label.

@Kaladani
Collaborator

Kaladani commented Jun 10, 2024

I know of two places currently where tests are defined:

In tests/

  • parallel.R
  • checkEffects.R

Are there any others? The checks in checkEffects.R are in principle simple testthat::expect_equal() checks for the "target statistics" (ans$targets), in addition to checking that a model including the effect runs at all, while parallel.R might need some more involved converting.
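To sketch how such a target-statistic check could translate into testthat (the effect, algorithm settings, and reference comparison here are placeholders, not taken from the actual checkEffects.R):

```r
# Hypothetical sketch of a checkEffects-style test; the effect under test
# (transTrip) and the settings are illustrative placeholders.
library(testthat)
library(RSiena)

test_that("a model including the effect runs and returns target statistics", {
  friendship <- sienaDependent(array(c(s501, s502, s503), dim = c(50, 50, 3)))
  mydata <- sienaDataCreate(friendship)
  myeff <- includeEffects(getEffects(mydata), transTrip)
  myalgo <- sienaAlgorithmCreate(projname = NULL, nsub = 1, n3 = 50, seed = 123)
  ans <- siena07(myalgo, data = mydata, effects = myeff, batch = TRUE)
  # The estimation should at least run and produce finite target statistics
  expect_true(all(is.finite(ans$targets)))
  # In a real test, compare against stored reference values, e.g.:
  # expect_equal(ans$targets, reference_targets, tolerance = 1e-6)
})
```

Since the targets are computed deterministically from the data, storing reference values once and comparing with a tolerance seems like the natural translation of the existing checks.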

Some of the tests involve slow test procedures, e.g. the multiple estimation runs for testing diffusion rate effects in checkEffects.R. When using testthat, we could also decide to set some of the more involved test procedures to skip_on_cran().
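Wrapping the slower procedures would then look like this (sketch only; the test name and body are placeholders):

```r
library(testthat)

test_that("diffusion rate effects estimate correctly (slow)", {
  skip_on_cran()  # skip the repeated estimation runs on CRAN's check farm
  # ... multiple estimation runs as in the current checkEffects.R ...
})
```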

The description in CONTRIBUTING.md might also need some updating; e.g. we probably do not want a separate test file for each function, and the checkEffects logic for new effects could be described there as well.

The naming convention for test files is test-FILENAME_IN_R_DIRECTORY-FUNCTION_NAME.R, i.e. test files are named after the file in the R directory containing the original function, prefixed with "test" and optionally suffixed with the name of the function being tested.

I can provide an example for testing the avGroup effect within the testthat-framework, if we want to start converting. (I've also already written another test and set up the testthat infrastructure for the new feature branch sienaMargins).
