[VitMatte] Cannot export VitMatte model to ONNX #1795
cc @xenova
As you can see from the warnings, there are a few casts that were missed. Fixing these casts will fix the issue. cc @fxmarty, what is the recommended way to do this?
@xenova Didn't you already fix it in #1582 (comment)? I can have a look, though.
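The cast problem the comments refer to can be shown with a toy example (the function names here are hypothetical; the real casts live somewhere in the VitMatte/VitDet modeling code). Calling Python `float(...)` on a tensor during tracing bakes the value in as a constant, whereas `.to(torch.float32)` keeps it as a graph op:

```python
import torch

def bad_norm(x: torch.Tensor) -> torch.Tensor:
    # float() pulls the value out of the graph: during tracing it is
    # recorded as a fixed Python constant, so the traced/exported graph
    # silently reuses it for every future input.
    scale = float(x.abs().max())
    return x / scale

def good_norm(x: torch.Tensor) -> torch.Tensor:
    # .to(torch.float32) keeps the value as a tensor op, so it stays
    # in the graph and is recomputed for each input.
    scale = x.abs().max().to(torch.float32)
    return x / scale

example = torch.tensor([1.0, 2.0, 4.0])
traced_bad = torch.jit.trace(bad_norm, example)   # emits a TracerWarning
traced_good = torch.jit.trace(good_norm, example)

new_input = torch.tensor([1.0, 2.0, 8.0])
print(traced_bad(new_input))   # wrong: still divides by the traced 4.0
print(traced_good(new_input))  # correct: divides by 8.0
```

The same reasoning applies to `int(...)` versus `.to(torch.int64)`: any value that must depend on the input at ONNX runtime has to stay a tensor.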
System Info
Who can help?
@NielsRogge @xenova
Information

Tasks

An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Exactly the same output as mentioned here. However, running the exported model gave errors.

As suggested by Xenova, I have to look for Python casts of `int(...)` and `float(...)` and change them to `.to(torch.int64)` and `.to(torch.float32)`. This means the integrated code of VitMatte can be improved to make it ONNX exportable. However, using the VS Code search function, I could not find any `int(...)` or `float(...)` cast that is directly related to VitMatte or VitDet. Can someone please point me to where I should change the cast? Thank you so much!

Expected behavior

The exported ONNX model should be able to run inference, as shown here. Since this is not an issue with Optimum, please take a look and give me some guidance. Thank you so much!