[FX][Conformance] Enable Conformance Test for FX Backend #3321
base: develop
Conversation
```diff
@@ -94,7 +94,7 @@ def _validate(self) -> None:
    predictions = np.zeros(dataset_size)
    references = -1 * np.ones(dataset_size)

    if self.backend in FX_BACKENDS and self.torch_compile_validation:
```
Please do not remove the `self.torch_compile_validation` option, and make it `True` by default.
But then the default path for FX backend models will use OV validation.
Typo, my bad, I meant True by default
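For context, a minimal sketch of what this gating could look like, assuming the names from the diff above; the backend values and both validation helpers are hypothetical placeholders, not the PR's actual code:

```python
# Sketch only: backend names and both helpers are placeholders.
FX_BACKENDS = {"fx_torch", "cuda_fx_torch"}


class ValidationSketch:
    def __init__(self, backend: str, torch_compile_validation: bool = True):
        self.backend = backend
        # Per the review: keep the option and default it to True.
        self.torch_compile_validation = torch_compile_validation

    def _validate(self) -> None:
        if self.backend in FX_BACKENDS and self.torch_compile_validation:
            # Validate via torch.compile(..., backend="openvino").
            self._validate_torch_compile()
        else:
            # Fallback: validate the exported OV IR directly.
            self._validate_ov()

    def _validate_torch_compile(self) -> None: ...

    def _validate_ov(self) -> None: ...
```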
```python
if file.endswith(".bin"):
    bin_file = file
elif file.endswith(".xml"):
    xml_file = file
```
What if there are several submodels in this dir?
Submodels of the same model, or other models? If the problem is the latter, I can save to a different folder keyed by the model name, like this:

```python
torch.compile(
    exported_model.module(),
    backend="openvino",
    options={"model_caching": True, "cache_dir": str(self.output_model_dir / self.model_name)},
)
```

instead of just `"cache_dir": str(self.output_model_dir)`.
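(For reference, once the cache directory contains a single IR pair, it can be read back with the standard OpenVINO API; the file paths below are illustrative and depend on the `cache_dir` layout chosen above:)

```python
import openvino as ov

core = ov.Core()
# Read the IR cached by torch.compile model caching; paths are illustrative.
model = core.read_model(model="model.xml", weights="model.bin")
compiled_model = core.compile_model(model, "CPU")
```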
I mean one model being cut into several parts, as it was with the Yolo 11 model. As far as I remember, this means several IRs are generated for one model and run sequentially.
Hm, but models with graph breaks should not be supported, right?
They should have one graph, but due to bugs in OV/NNCF it is possible that there are several IRs. I wonder what the result will be? Perhaps we shouldn't analyze the parts of the model separately.
Then maybe I can raise an error after checking for multiple .bin and .xml files in the location. The expected behavior would be simply to rename and replace the files.
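A minimal sketch of that check, assuming the directory scan from the diff above; the function name and error message are illustrative:

```python
from pathlib import Path


def find_single_ir(cache_dir: Path) -> tuple[Path, Path]:
    """Return the single (xml, bin) IR pair in cache_dir, or raise."""
    xml_files = sorted(cache_dir.glob("*.xml"))
    bin_files = sorted(cache_dir.glob("*.bin"))
    if len(xml_files) != 1 or len(bin_files) != 1:
        # Several IRs usually mean the model was split into submodels
        # (e.g. by graph breaks); analyzing the parts separately is
        # likely misleading, so fail loudly instead.
        raise RuntimeError(
            f"Expected exactly one IR in {cache_dir}, found "
            f"{len(xml_files)} .xml and {len(bin_files)} .bin files"
        )
    return xml_files[0], bin_files[0]
```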
Could you update the description with more details? I mean, it's not entirely clear what exactly was done.
@anzr299, the issue 162009 was fixed in OpenVINO 2025.1, please consider it.
Changes
The above changes were made to obtain the validation path:
[Screenshot: validation path]
Related tickets
163422
Tests