
Commit 3d22dda

Updated real_models list
1 parent 34dc469 commit 3d22dda

1 file changed: +18 -9

tests/python_tests/models/real_models

+18 -9
@@ -11,7 +11,7 @@ EleutherAI/gpt-neo-2.7B
 EleutherAI/gpt-neox-20b
 EleutherAI/pythia-160m
 GAIR/Abel-7B-002
-# OrionStarAI/Orion-14B-Base: pip install flash_attn (https://github.com/huggingface/transformers/pull/30954)
+OrionStarAI/Orion-14B-Base
 PygmalionAI/pygmalion-6b
 Qwen/Qwen-7B
 Qwen/Qwen-7B-Chat
@@ -21,6 +21,8 @@ Qwen/Qwen1.5-7B
 Qwen/Qwen1.5-7B-Chat
 Qwen/Qwen1.5-MoE-A2.7B
 Qwen/Qwen1.5-MoE-A2.7B-Chat
+Qwen/Qwen2-7B
+Qwen/Qwen2-7B-Instruct
 Salesforce/codegen-350M-multi
 Salesforce/codegen-350M-nl
 Salesforce/codegen2-1b
@@ -48,15 +50,16 @@ bigscience/bloomz-1b7
 bigscience/bloomz-560m
 bigscience/bloomz-7b1
 cerebras/Cerebras-GPT-13B
-# core42/jais-13b: wrong output with PA
-# core42/jais-13b-chat: wrong output with PA
+core42/jais-13b
+core42/jais-13b-chat
 databricks/dolly-v1-6b
 databricks/dolly-v2-3b
 # deepseek-ai/deepseek-coder-33b-instruct: OpenVINO tokenizers - Cannot convert tokenizer of this type without `.model` file
 # deepseek-ai/deepseek-coder-6.7b-instruct: OpenVINO tokenizers - Cannot convert tokenizer of this type without `.model` file
-# deepseek-ai/deepseek-moe-16b-base: optimum - Trying to export a deepseek model, that is a custom or unsupported architecture
-# facebook/blenderbot-3B: optimum - IndexError: tuple index out of range
-# facebook/incoder-1B: CB - Failed to detect "eos_token_id" in openvino_tokenizer.xml runtime information
+deepseek-ai/deepseek-moe-16b-base
+deepseek-ai/DeepSeek-V3-Base
+facebook/blenderbot-3B
+facebook/incoder-1B
 facebook/opt-1.3b
 facebook/opt-125m
 facebook/opt-2.7b
@@ -66,6 +69,7 @@ google/gemma-1.1-7b-it
 google/gemma-2b
 google/gemma-2b-it
 google/gemma-7b
+google/gemma-2-9b
 google/pegasus-big_patent
 google/pegasus-large
 gpt2
@@ -86,6 +90,10 @@ microsoft/DialoGPT-medium
 microsoft/Orca-2-7b
 microsoft/Phi-3-mini-128k-instruct
 microsoft/Phi-3-mini-4k-instruct
+microsoft/Phi-3-medium-128k-instruct
+microsoft/Phi-3-small-8k-instruct
+microsoft/Phi-3-small-128k-instruct
+microsoft/Phi-3.5-MoE-instruct
 # microsoft/biogpt: OpenVINO Tokenizers - openvino.runtime.exceptions.OVTypeError: Tokenizer type is not supported: <class 'transformers.models.biogpt.tokenization_biogpt.BioGptTokenizer'>
 microsoft/phi-1_5
 microsoft/phi-2
@@ -106,10 +114,10 @@ openbmb/MiniCPM-2B-dpo-bf16
 openbmb/MiniCPM-2B-sft-bf16
 openchat/openchat_3.5
 openlm-research/open_llama_13b
-# openlm-research/open_llama_3b: CPU - head size must be multiple of 16, current: 100
-# openlm-research/open_llama_3b_v2: CPU - head size must be multiple of 16, current: 100
+openlm-research/open_llama_3b
+openlm-research/open_llama_3b_v2
 # replit/replit-code-v1-3b: OpenVINO Tokenizers - AttributeError: 'ReplitLMTokenizer' object has no attribute 'sp_model'
-# rinna/bilingual-gpt-neox-4b: OpenVINO Tokenizers - trash output (https://jira.devtools.intel.com/browse/CVS-142063)
+rinna/bilingual-gpt-neox-4b
 rinna/youri-7b-chat
 stabilityai/stable-code-3b
 stabilityai/stable-zephyr-3b
@@ -120,3 +128,4 @@ tiiuae/falcon-rw-7b
 togethercomputer/RedPajama-INCITE-Chat-3B-v1
 # xverse/XVERSE-7B-Chat: Transformers - Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 78 column 3
 # xverse/XVERSE-MoE-A4.2B: Transformers - Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 78 column 3
+Deci/DeciLM-7B
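The file this diff touches is a plain list: each uncommented line is a Hugging Face model ID to exercise in the real-model tests, and a leading # disables an entry while recording the reason (the reasons deleted above are the ones this commit treats as resolved). Below is a minimal sketch of how such a list could be consumed, assuming a pytest-based harness next to the file; the helper read_real_models, the file location, and the test body are hypothetical illustrations, not the repository's actual test code.

# Minimal sketch (assumption, not the repository's actual test harness):
# parse tests/python_tests/models/real_models, skip blank lines and
# '#'-commented entries, and parametrize a test over the enabled model IDs.
from pathlib import Path

import pytest

MODELS_FILE = Path(__file__).parent / "models" / "real_models"  # hypothetical location


def read_real_models(path: Path) -> list[str]:
    """Return enabled model IDs; lines starting with '#' are disabled entries."""
    models = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        models.append(line)
    return models


@pytest.mark.parametrize("model_id", read_real_models(MODELS_FILE))
def test_real_model(model_id: str):
    # Placeholder check: every enabled entry looks like "org/name" or a bare name.
    assert model_id and " " not in model_id

Keeping disabled entries as comments rather than deleting them preserves the failure reason in the file itself, so re-enabling a model stays a one-line change like the ones in this commit.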
