[FX][Conformance] Enable Conformance Test for FX Backend #3321
base: develop
```diff
@@ -94,7 +94,7 @@ def _validate(self) -> None:
         predictions = np.zeros(dataset_size)
         references = -1 * np.ones(dataset_size)

-        if self.backend in FX_BACKENDS and self.torch_compile_validation:
+        if self.backend in FX_BACKENDS:
             predictions, references = self._validate_torch_compile(val_loader, predictions, references)
         else:
             predictions, references = self._validate_ov(val_loader, predictions, references, dataset_size)
```

Review thread on this hunk:

> but then the default path for FX backend models will be using OV validation

> Typo, my bad, I meant `True` by default
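The dispatch in the hunk above can be sketched as follows. This is a minimal illustration, not NNCF's actual conformance-test code: `FX_BACKENDS`, `ConformanceValidator`, and the two validator bodies are hypothetical stand-ins, with the real validation logic stubbed out.

```python
import numpy as np

# Hypothetical placeholder for the PR's backend set; the real values
# come from the conformance-test suite, not from this sketch.
FX_BACKENDS = {"FX_TORCH"}


class ConformanceValidator:
    def __init__(self, backend: str, dataset_size: int):
        self.backend = backend
        self.dataset_size = dataset_size

    def _validate_torch_compile(self, val_loader, predictions, references):
        # Stub: the FX path would run the model via torch.compile.
        return predictions, references

    def _validate_ov(self, val_loader, predictions, references, dataset_size):
        # Stub: the default path would run an OpenVINO IR.
        return predictions, references

    def _validate(self, val_loader):
        predictions = np.zeros(self.dataset_size)
        references = -1 * np.ones(self.dataset_size)
        # After this change, FX backends always take the torch.compile
        # path; every other backend keeps using OpenVINO validation.
        if self.backend in FX_BACKENDS:
            predictions, references = self._validate_torch_compile(
                val_loader, predictions, references
            )
        else:
            predictions, references = self._validate_ov(
                val_loader, predictions, references, self.dataset_size
            )
        return predictions, references
```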
What if there are several submodels in this dir?
Submodels of the same model, or other models?

If the problem is the latter, I can save them under a per-model folder, like this:

```python
torch.compile(
    exported_model.module(),
    backend="openvino",
    options={"model_caching": True, "cache_dir": str(self.output_model_dir / self.model_name)},
)
```

instead of just `"cache_dir": str(self.output_model_dir)`.
I mean one model being cut into several parts, as happened with the Yolo 11 model. As far as I remember, that means several IRs are generated for one model and run sequentially.
Hm, but models with graph breaks should not be supported, right?
They should have one graph, but due to bugs in ov/nncf it is possible that several IRs are produced. I wonder what the result would be? Perhaps we shouldn't analyze the parts of the model separately.
Then maybe I can raise an error after checking for multiple .bin and .xml files in the location. The expected behavior would be to simply rename and replace the files.
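The check proposed above could look like this minimal sketch. `check_single_ir` is a hypothetical helper name, and the cache layout (one `.xml`/`.bin` pair per cached IR) is an assumption about OpenVINO's model cache, not confirmed by this thread.

```python
from pathlib import Path


def check_single_ir(cache_dir: Path) -> Path:
    """Return the path of the single cached OpenVINO IR (.xml) in cache_dir.

    Raises RuntimeError when more than one .xml/.bin pair is present,
    which would mean the model was split into several IRs that are run
    sequentially and should not be analyzed separately.
    """
    xml_files = sorted(Path(cache_dir).glob("*.xml"))
    bin_files = sorted(Path(cache_dir).glob("*.bin"))
    if len(xml_files) != 1 or len(bin_files) != 1:
        raise RuntimeError(
            f"Expected exactly one cached IR in {cache_dir}, found "
            f"{len(xml_files)} .xml and {len(bin_files)} .bin files"
        )
    return xml_files[0]
```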