This issue is linked to the ONNXConfig for all working group, created to implement an ONNXConfig for every available model. Let's extend that work and add every model with a fully functional ONNXConfig to ORTConfigManager.
Adding models to ORTConfigManager lets 🤗 Optimum users speed up even more of their models with ONNX Runtime optimizations!
Feel free to join us in this adventure! Join the org by clicking here
Here is a non-exhaustive list of models that have an ONNXConfig and could be added to ORTConfigManager:
This list includes only models with an ONNXConfig already implemented. If your target model doesn't have one, please open an issue, or (even cooler) implement it yourself in the 🤗 Transformers repository. Check this issue to learn how, and see the illustrative sketch after the list below.
ALBERT
BART
BEiT
BERT
BigBird
BigBirdPegasus
Blenderbot
BlenderbotSmall
BLOOM
CamemBERT
CLIP
CodeGen
ConvNeXT
ConvBERT
Data2VecText
Data2VecVision
DeBERTa
DeBERTa-v2
DeiT
DETR
DistilBERT
ELECTRA
Flaubert
GPTBigCode
GPT2
GPT-J
GPT-Neo
GPT-NeoX
I-BERT
LayoutLM
LayoutLMv2
LayoutLMv3
LeViT
Llama
LongT5
M2M100
mBART
MT5
MarianMT
MobileBERT
MobileViT
Nyströmformer
OpenAI GPT-2
PLBart
Pegasus
Perceiver
ResNet
RoFormer
RoBERTa
SqueezeBERT
T5
ViT
Whisper
XLM
XLM-RoBERTa
XLM-RoBERTa-XL
YOLOS
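As promised above the list, here is what implementing an ONNXConfig in 🤗 Transformers often amounts to: declaring the model's inputs and their dynamic axes. This is a minimal, hypothetical sketch using BERT-style encoder inputs (`MyModelOnnxConfig` and its inputs are illustrative, not taken from any real model); see the linked issue for the full process.

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MyModelOnnxConfig(OnnxConfig):
    # Hypothetical config for illustration only; real models may need
    # more inputs (token_type_ids, pixel_values, decoder inputs, ...).
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Map each input name to its dynamic axes:
        # axis 0 is the batch dimension, axis 1 the sequence length.
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```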
If you want an example of implementation, I did one for MT5 in #341.
You need to check how the number of attention heads and the hidden size are named in the original implementation of your target model in the 🤗 Transformers source code (for example, `num_attention_heads` and `hidden_size` for BERT, but `num_heads` and `d_model` for MT5), then add them to the `_conf` dictionary. Finally, add your implemented model to the tests to make it fully functional.
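For reference, here is roughly what such an entry looks like. This is a sketch, assuming `_conf` maps a lowercase model type to a tuple of (attention-heads attribute name, hidden-size attribute name, model type for the ONNX Runtime optimizer), as it did when the MT5 entry was added; check the current `ORTConfigManager` source in `optimum/onnxruntime/utils.py` before copying.

```python
# Sketch of _conf entries in optimum/onnxruntime/utils.py, assuming the
# (heads attribute, hidden-size attribute, optimizer model type) layout.
class ORTConfigManager:
    _conf = {
        # BERT's config names these num_attention_heads / hidden_size:
        "bert": ("num_attention_heads", "hidden_size", "bert"),
        # MT5's config names them num_heads / d_model instead, and the
        # ONNX Runtime optimizer handles it via its "bart" model type:
        "mt5": ("num_heads", "d_model", "bart"),
    }
```

The third element tells ONNX Runtime which set of fusion patterns to apply during optimization, which is why several architectures map onto an existing optimizer model type such as "bert" or "bart" rather than their own.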
Updating the list of supported models: BlenderBot, BLOOM, GPTBigCode, GPT-NeoX, GPT-J, LongT5, Llama, mBART, M2M100, Nyströmformer, Pegasus, T5, ViT, and Whisper have since been added. @michaelbenayoun @fxmarty is there still interest in advancing with other models?