|
263 | 263 | " notebook_login()\n",
|
264 | 264 | "```\n",
|
265 | 265 | "\n",
|
266 | | - "* **qwen2-1.5b-instruct/qwen2-7b-instruct** - Qwen2 is the new series of Qwen large language models.Compared with the state-of-the-art open source language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most open source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.\n",
267 | | - "For more details, please refer to [model_card](https://huggingface.co/Qwen/Qwen2-7B-Instruct), [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). \n",
268 | | - "* **qwen1.5-0.5b-chat/qwen1.5-1.8b-chat/qwen1.5-7b-chat** - Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Qwen1.5 is a language model series including decoder language models of different model sizes. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention. You can find more details about model in the [model repository](https://huggingface.co/Qwen).\n",
| 266 | + "* **qwen2.5-0.5b-instruct/qwen2.5-1.5b-instruct/qwen2.5-3b-instruct/qwen2.5-7b-instruct/qwen2.5-14b-instruct** - Qwen2.5 is the latest series of Qwen large language models. Comparing with Qwen2, Qwen2.5 series brings significant improvements in coding, mathematics and general knowledge skills. Additionally, it brings long-context and multiple languages support including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. \n", |
| 267 | + "For more details, please refer to [model_card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).\n", |
269 | 268 | "* **qwen-7b-chat** - Qwen-7B is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. For more details about Qwen, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.\n",
|
270 | 269 | "* **mpt-7b-chat** - MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT-7B-chat is a chatbot-like model for dialogue generation. It was built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. More details about the model can be found in [blog post](https://www.mosaicml.com/blog/mpt-7b), [repository](https://github.com/mosaicml/llm-foundry/) and [HuggingFace model card](https://huggingface.co/mosaicml/mpt-7b-chat).\n",
|
271 | 270 | "* **chatglm3-6b** - ChatGLM3-6B is the latest open-source model in the ChatGLM series. While retaining many excellent features such as smooth dialogue and low deployment threshold from the previous two generations, ChatGLM3-6B employs a more diverse training dataset, more sufficient training steps, and a more reasonable training strategy. ChatGLM3-6B adopts a newly designed [Prompt format](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT_en.md), in addition to the normal multi-turn dialogue. You can find more details about model in the [model card](https://huggingface.co/THUDM/chatglm3-6b)\n",
|
|
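The download-and-convert cell itself is outside this hunk. As a rough, non-authoritative sketch of what exporting one of the listed checkpoints to OpenVINO IR looks like with optimum-intel (the Hub repo id and output directory below are assumptions, not values taken from the notebook):

```python
# Hedged sketch: export an assumed Hugging Face checkpoint to OpenVINO IR.
# "Qwen/Qwen2.5-0.5B-Instruct" is an assumed repo id; the notebook derives the
# actual checkpoint from the selected model_id option.
from pathlib import Path

from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

hf_repo = "Qwen/Qwen2.5-0.5B-Instruct"      # assumption: smallest Qwen2.5 instruct model
ir_dir = Path("qwen2.5-0.5b-instruct-ov")   # assumption: local output directory

if not ir_dir.exists():
    # export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
    model = OVModelForCausalLM.from_pretrained(hf_repo, export=True)
    model.save_pretrained(ir_dir)
    AutoTokenizer.from_pretrained(hf_repo).save_pretrained(ir_dir)
```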
686 | 685 | " \"group_size\": 128,\n",
|
687 | 686 | " \"ratio\": 0.5,\n",
|
688 | 687 | " },\n",
|
| 688 | + " \"qwen2.5-7b-instruct\": {\"sym\": True, \"group_size\": 128, \"ratio\": 1.0},\n", |
| 689 | + " \"qwen2.5-3b-instruct\": {\"sym\": True, \"group_size\": 128, \"ratio\": 1.0},\n", |
| 690 | + " \"qwen2.5-14b-instruct\": {\"sym\": True, \"group_size\": 128, \"ratio\": 1.0},\n", |
| 691 | + " \"qwen2.5-1.5b-instruct\": {\"sym\": True, \"group_size\": 128, \"ratio\": 1.0},\n", |
| 692 | + " \"qwen2.5-0.5b-instruct\": {\"sym\": True, \"group_size\": 128, \"ratio\": 1.0},\n", |
689 | 693 | " \"default\": {\n",
|
690 | 694 | " \"sym\": False,\n",
|
691 | 695 | " \"group_size\": 128,\n",
|
|
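The entries above are per-model 4-bit weight-compression parameters: `sym` selects symmetric quantization, `group_size` is the number of channels sharing one quantization scale, and `ratio` is the share of weights compressed to 4 bit (the remainder stays at 8 bit). How the notebook consumes them is outside this hunk; a minimal sketch, assuming they are fed to optimum-intel's `OVWeightQuantizationConfig`:

```python
# Hedged sketch: apply one of the per-model entries above as a 4-bit weight
# compression config. The notebook may wire these values up differently; this
# only illustrates what the keys mean.
from optimum.intel import OVWeightQuantizationConfig
from optimum.intel.openvino import OVModelForCausalLM

params = {"sym": True, "group_size": 128, "ratio": 1.0}  # qwen2.5-*-instruct entry

wc_config = OVWeightQuantizationConfig(
    bits=4,
    sym=params["sym"],                # symmetric INT4 quantization
    group_size=params["group_size"],  # 128 channels per quantization group
    ratio=params["ratio"],            # fraction of weights compressed to 4 bit
)

# Assumed checkpoint, as in the sketch above.
model = OVModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",
    export=True,
    quantization_config=wc_config,
)
```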
928 | 932 | "from transformers import AutoConfig, AutoTokenizer\n",
|
929 | 933 | "from optimum.intel.openvino import OVModelForCausalLM\n",
|
930 | 934 | "\n",
|
| 935 | + "import openvino as ov\n", |
931 | 936 | "import openvino.properties as props\n",
|
932 | 937 | "import openvino.properties.hint as hints\n",
|
933 | 938 | "import openvino.properties.streams as streams\n",
|
|
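These property modules are typically used to assemble the `ov_config` dictionary passed when compiling the model. The notebook's actual dictionary is not visible in this hunk; a minimal sketch with illustrative values:

```python
# Hedged sketch: the kind of ov_config these imports are usually used to build.
# Values here are illustrative defaults, not the notebook's exact settings.
import openvino.properties as props
import openvino.properties.hint as hints
import openvino.properties.streams as streams

ov_config = {
    hints.performance_mode(): hints.PerformanceMode.LATENCY,  # optimize for single-request latency
    streams.num(): "1",                                        # one inference stream
    props.cache_dir(): "model_cache",                          # cache compiled blobs between runs
}
```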
948 | 953 | "\n",
|
949 | 954 | "# On a GPU device a model is executed in FP16 precision. For red-pajama-3b-chat model there known accuracy\n",
|
950 | 955 | "# issues caused by this, which we avoid by setting precision hint to \"f32\".\n",
|
| 956 | + "core = ov.Core()\n", |
| 957 | + "\n", |
951 | 958 | "if model_id.value == \"red-pajama-3b-chat\" and \"GPU\" in core.available_devices and device.value in [\"GPU\", \"AUTO\"]:\n",
|
952 | 959 | " ov_config[\"INFERENCE_PRECISION_HINT\"] = \"f32\"\n",
|
953 | 960 | "\n",
|
|
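The added `core = ov.Core()` line makes the device query below self-contained. As a standalone, hedged sketch of the same workaround (the selector values are assumptions; only the model name and the FP32 override come from the notebook):

```python
# Hedged sketch of the device-dependent precision override, standalone.
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model_name = "red-pajama-3b-chat"   # assumption: value of the notebook's model selector
target_device = "AUTO"              # assumption: value of the notebook's device selector

ov_config = {}
if model_name == "red-pajama-3b-chat" and "GPU" in core.available_devices and target_device in ["GPU", "AUTO"]:
    # GPU runs the model in FP16 by default, which hurts accuracy for this model;
    # force FP32 execution instead. hints.inference_precision() resolves to the
    # same "INFERENCE_PRECISION_HINT" key used in the notebook cell.
    ov_config[hints.inference_precision()] = "f32"
```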