
Add version info to OpenVINO models #690

Merged 2 commits into huggingface:main on Apr 29, 2024

Conversation

helena-intel (Collaborator):

Versions of optimum-intel and its dependencies can have an impact on performance and accuracy. This PR adds the transformers, torch, optimum and optimum-intel versions to the openvino_model.xml file (the OpenVINO version is already stored there). This is very useful when debugging issues or investigating differences between environments.

Example: an <optimum> section is added to openvino_model.xml (screenshot omitted here).

@eaidova @AlexKoff88


model.set_rt_info(_transformers_version, ["optimum", "transformers_version"])
model.set_rt_info(_torch_version, ["optimum", "pytorch_version"])
model.set_rt_info(_optimum_intel_version, ["optimum", "optimum_intel_version"])
model.set_rt_info(_optimum_version, ["optimum", "optimum_version"])
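For reference, the metadata written by the calls above lands in the IR's rt_info section of openvino_model.xml. A minimal sketch of reading it back out with only the standard library — the sample XML below is hypothetical, mirroring the <rt_info>/<optimum> layout these calls produce (real IR files contain much more, and the exact element names here are an assumption):

```python
# Sketch: extract the "optimum" rt_info entries from an OpenVINO IR XML.
# SAMPLE_IR is a hypothetical, heavily trimmed stand-in for a real
# openvino_model.xml; the version strings are illustrative except the
# optimum_intel_version, which is quoted from this PR thread.
import xml.etree.ElementTree as ET

SAMPLE_IR = """<?xml version="1.0"?>
<net name="example" version="11">
  <rt_info>
    <optimum>
      <transformers_version value="4.39.3"/>
      <pytorch_version value="2.2.2"/>
      <optimum_intel_version value="1.17.0.dev0+9d58b66"/>
      <optimum_version value="1.19.1"/>
    </optimum>
  </rt_info>
</net>
"""

def read_optimum_versions(xml_text: str) -> dict:
    """Collect every child of <rt_info>/<optimum> as a name -> value dict."""
    root = ET.fromstring(xml_text)
    node = root.find("./rt_info/optimum")
    if node is None:
        return {}
    return {child.tag: child.get("value") for child in node}

versions = read_optimum_versions(SAMPLE_IR)
```

In practice one would use OpenVINO's own API (core.read_model(...) followed by model.get_rt_info([...])) rather than parsing the XML by hand; the sketch only shows where the data lives.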
eaidova (Collaborator), Apr 26, 2024:

Do we also need to add info about NNCF here?
And since other libraries may be used as the model source for conversion (diffusers for Stable Diffusion, timm for image classification, and sentence_transformers), I think it would be useful to record their version info too (when they are available, or based on the source library used for conversion).
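A hedged sketch of the "only when available" idea raised here — not the exact logic that was merged — using stdlib importlib to detect optional libraries before recording their versions:

```python
# Sketch (illustrative, not the merged PR logic): record a version entry
# only for optional dependencies that are actually installed.
import importlib.metadata
import importlib.util

OPTIONAL_LIBS = ["nncf", "diffusers", "timm", "sentence-transformers"]

def collect_optional_versions(libs=OPTIONAL_LIBS) -> dict:
    versions = {}
    for name in libs:
        # find_spec wants the import name; metadata wants the distribution name
        module_name = name.replace("-", "_")
        if importlib.util.find_spec(module_name) is not None:
            try:
                versions[f"{module_name}_version"] = importlib.metadata.version(name)
            except importlib.metadata.PackageNotFoundError:
                pass
    return versions
```

As helena-intel notes below, installed-or-not is a weaker signal than the model's actual source library, so the merged change keys some entries off the model itself instead.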

Collaborator:

+1 for including NNCF version

helena-intel (Collaborator, Author):

I added NNCF, diffusers, timm, sentence-transformers and onnx (for models converted from ONNX). I prefer not to add them merely because the dependency is installed; it seems strange to have a timm_version in a BERT model, for example. For timm and diffusers I now infer the library name from the model path. It seems to work well, but let me know if there is a better way.

@@ -33,6 +33,7 @@
STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt}

_optimum_version = importlib_metadata.version("optimum")
_optimum_intel_version = importlib_metadata.version("optimum-intel")
eaidova (Collaborator), Apr 26, 2024:

Why not from optimum.intel.version import __version__?

helena-intel (Collaborator, Author):

I liked the consistency with _optimum_version :) But also, the importlib_metadata version includes the commit ID:

>>> from optimum.intel.version import __version__
>>> from optimum.intel.utils.import_utils import _optimum_intel_version
>>> __version__
'1.17.0.dev0'
>>> _optimum_intel_version
'1.17.0.dev0+9d58b66'
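The difference shown above is PEP 440's "local version" segment: the part after "+" (here the commit ID) that __version__ lacks. A quick illustrative helper for splitting it out:

```python
# Sketch: separate a PEP 440 version string into its public part and its
# optional local segment (which optimum-intel dev builds use for the commit).
def split_local_version(version: str):
    public, _, local = version.partition("+")
    return public, local or None

print(split_local_version("1.17.0.dev0+9d58b66"))  # ('1.17.0.dev0', '9d58b66')
print(split_local_version("1.17.0"))               # ('1.17.0', None)
```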

echarlaix (Collaborator) left a review:

Looks great, thanks @helena-intel

Comment on lines +715 to +719
if any("token_embeddings" in output.get_names() for output in model.outputs):
import sentence_transformers

model.set_rt_info(sentence_transformers.__version__, ["optimum", "sentence_transformers_version"])
if library_name == "diffusers":
Collaborator:

Why not:

Suggested change
if any("token_embeddings" in output.get_names() for output in model.outputs):
import sentence_transformers
model.set_rt_info(sentence_transformers.__version__, ["optimum", "sentence_transformers_version"])
if library_name == "diffusers":
if library_name == "sentence_transformers":
import sentence_transformers
model.set_rt_info(sentence_transformers.__version__, ["optimum", "sentence_transformers_version"])
elif library_name == "diffusers":

helena-intel (Collaborator, Author):

That was my first attempt, but it didn't work: after optimum-cli export openvino -m BAAI/bge-base-en-v1.5 --library sentence_transformers bge-base-en-ov-st, library_name is detected as transformers by TasksManager.infer_library_from_model. If I copy config_sentence_transformers.json into the model directory, the model is detected as sentence-transformers. Should we save that file for models exported with the sentence_transformers library?

Collaborator:

I see. An option could be to re-use the library_name inferred from the original model; that solution also works for me.

@echarlaix echarlaix merged commit 4869104 into huggingface:main Apr 29, 2024
11 checks passed