dump model before compress cli #723
Conversation
@@ -412,11 +414,24 @@ def ts_patched_forward(*args, **kwargs):

    if stateful:
        patch_stateful(model.config, ov_model)
    if ov_config.quantization_config:
There is another PR that probably solves this problem: https://github.com/huggingface/optimum-intel/pull/721/files
@nikita-savelyevv's PR, as I understand it, works only for quantization with a dataset in the specific cases where you need to infer the model. The goal of my changes is to remove the PyTorch model before any weight compression starts, in order to free memory (before the IR is saved to disk it shares weights with the PyTorch model, and compression may additionally require its own memory on top of that). When we use the API for exporting a model, weight compression happens after the conversion step has finished and we have already removed the PyTorch model from RAM; but when optimum-cli is used, conversion and compression are combined in one step and the PyTorch model is still alive at the compression step.
I opened this as an experiment only; both changes can be useful, and I think we can combine them.
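A minimal sketch of the idea described above, with hypothetical helper names (`convert_to_openvino`, `compress_weights`) standing in for the actual export and compression code in this PR; the point is simply the ordering of `del` relative to compression:

```python
import gc

def export_and_compress(model, ov_config):
    # Convert the PyTorch model to an OpenVINO IR (hypothetical helper).
    ov_model = convert_to_openvino(model)

    # Drop the PyTorch weights before compression starts: until the IR is
    # saved to disk it shares weights with the PyTorch model, and weight
    # compression may need extra memory on top of that. This only frees
    # memory if the caller holds no other reference to `model`.
    del model
    gc.collect()

    if ov_config.quantization_config:
        # Compress the IR weights (hypothetical helper); peak RAM now
        # holds only the OpenVINO model, not both models at once.
        compress_weights(ov_model, ov_config.quantization_config)
    return ov_model
```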
Yes, it seems the changes are independent.
@eaidova Just out of curiosity, do you have numbers for how much memory can be saved with this approach?
Thanks, @eaidova. @nikita-savelyevv, can you please adopt the changes from this PR?
@AlexKoff88 I don't think this applies to my case, because I don't have access to the PyTorch or OpenVINO model objects after the main_export call, so I can't delete them.
Or do you mean copying the changes from this PR into my PR? If so, in my opinion they should be added separately.
Sorry, I didn't notice that this applies to torch models only. Then it makes sense to keep these changes separate.