
Mark evaluations with (outdated) invalid algorithm as failed #3800

Open
amickan opened this issue Jan 27, 2025 · 5 comments

Comments

@amickan
Contributor

amickan commented Jan 27, 2025

When re-evaluating submissions for a challenge phase, it is possible that a submission is no longer valid for the challenge. This can happen if the interface for the phase changed in the meantime (i.e. the algorithm inputs are different now). Re-evaluating such a submission will currently leave the linked evaluation stuck in the Executing algorithm state, because:

  • in create_algorithm_jobs we call filter_civs_for_algorithm, which filters based on the inputs that are set on the algorithm. So even if new archive items are passed to create_algorithm_jobs, they get filtered out --> no new jobs to create --> set_evaluation_inputs gets called
  • set_evaluation_inputs does nothing if not self.inputs_complete, where we check that there is exactly one successful job per valid archive item
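The filtering failure described above can be sketched as follows. This is a minimal stand-alone illustration, not the actual grand-challenge code: the class names and the filter function are simplified stand-ins, and the real filter_civs_for_algorithm signature may differ.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Interface:
    slug: str


@dataclass(frozen=True)
class ComponentInterfaceValue:
    interface: Interface


def filter_civ_sets(civ_sets, algorithm_inputs):
    # Keep only archive-item value sets that cover every input the
    # algorithm declares; sets for a changed interface are silently dropped.
    needed = {i.slug for i in algorithm_inputs}
    return [
        civs
        for civs in civ_sets
        if needed <= {civ.interface.slug for civ in civs}
    ]


# The phase interface changed: the archive items now provide "new-input",
# but the submitted algorithm still declares "old-input" as its input.
archive_items = [[ComponentInterfaceValue(Interface("new-input"))]]
result = filter_civ_sets(archive_items, [Interface("old-input")])
print(result)  # [] -> no jobs created, evaluation hangs in Executing algorithm
```

Because every archive item is filtered out, no jobs are created, inputs_complete never becomes true, and set_evaluation_inputs silently does nothing.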

The newly created evaluation for such a submission should be marked as Failed (or Cancelled?) with an appropriate error message (e.g. "Algorithm inputs don't match those defined for the phase"). We should check for and catch this early, probably in create_algorithm_jobs.
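The proposed early check could look like the sketch below. The exception class, function name, and signature are hypothetical; in the real codebase this guard would live inside create_algorithm_jobs and set the evaluation's status instead of printing.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Interface:
    slug: str


class AlgorithmInterfaceMismatch(Exception):
    """Raised when a submission's algorithm no longer fits the phase."""


def check_algorithm_matches_phase(algorithm_inputs, phase_inputs):
    # Proposed early guard: fail fast with a clear message instead of
    # leaving the evaluation stuck in "Executing algorithm".
    if {i.slug for i in algorithm_inputs} != {i.slug for i in phase_inputs}:
        raise AlgorithmInterfaceMismatch(
            "Algorithm inputs don't match those defined for the phase"
        )


try:
    check_algorithm_matches_phase(
        algorithm_inputs=[Interface("old-input")],
        phase_inputs=[Interface("new-input")],
    )
except AlgorithmInterfaceMismatch as e:
    # Here the real code would mark the evaluation as Failed with this message.
    print(f"Evaluation failed: {e}")
```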

First discovered here: https://github.com/DIAGNijmegen/rse-grand-challenge-admin/issues/428#issuecomment-2615173942

@jmsmkn jmsmkn changed the title Mark submissions with (outdated) invalid algorithm as failed Mark evaluations with (outdated) invalid algorithm as failed Jan 27, 2025
@jmsmkn
Member

jmsmkn commented Jan 27, 2025

I think you mean evaluations and not submissions throughout? create_algorithm_jobs is the correct place to check.

@amickan
Contributor Author

amickan commented Jan 27, 2025

Hm, we're technically re-evaluating submissions here (because the evaluation container changed): this creates a new evaluation with the updated method, which then gets stuck in the Executing algorithm state. Or am I wrong?

I have updated the description to reflect that.

@amickan
Contributor Author

amickan commented Jan 27, 2025

And it's the submitted algorithm that is no longer valid for the phase.

@jmsmkn
Member

jmsmkn commented Jan 27, 2025

Indeed, but a submission does not have a status, only evaluations do.

@amickan
Contributor Author

amickan commented Jan 27, 2025

> Indeed, but a submission does not have a status, only evaluations do.

True, which is why I suggested "The newly created evaluation for such a submission should be marked as Failed".
