
Commit 1457a7e

fix vlm notebooks (#2821)
1 parent 002e7fb commit 1457a7e

File tree

4 files changed, +5 -4 lines changed


notebooks/llava-multimodal-chatbot/llava-multimodal-chatbot-genai.ipynb (+1 -1)

@@ -475,7 +475,7 @@
 "\n",
 "\n",
 "def load_image(image_file):\n",
-"    if image_file.startswith(\"http\") or image_file.startswith(\"https\"):\n",
+"    if isinstance(image_file, str) and (image_file.startswith(\"http\") or image_file.startswith(\"https\")):\n",
 "        response = requests.get(image_file)\n",
 "        image = Image.open(BytesIO(response.content)).convert(\"RGB\")\n",
 "    else:\n",

notebooks/llava-multimodal-chatbot/llava-multimodal-chatbot-optimum.ipynb (+2 -1)

@@ -459,7 +459,7 @@
 "\n",
 "\n",
 "def load_image(image_file):\n",
-"    if image_file.startswith(\"http\") or image_file.startswith(\"https\"):\n",
+"    if isinstance(image_file, str) and (image_file.startswith(\"http\") or image_file.startswith(\"https\")):\n",
 "        response = requests.get(image_file)\n",
 "        image = Image.open(BytesIO(response.content)).convert(\"RGB\")\n",
 "    else:\n",
@@ -473,6 +473,7 @@
 "\n",
 "if not image_file.exists():\n",
 "    image = load_image(image_url)\n",
+"    image.save(image_file)\n",
 "else:\n",
 "    image = load_image(image_file)\n",
 "\n",

notebooks/llava-next-multimodal-chatbot/llava-next-multimodal-chatbot.ipynb (+1 -1)

@@ -437,7 +437,7 @@
 "\n",
 "\n",
 "def load_image(image_file):\n",
-"    if image_file.startswith(\"http\") or image_file.startswith(\"https\"):\n",
+"    if isinstance(image_file, str) and (image_file.startswith(\"http\") or image_file.startswith(\"https\")):\n",
 "        response = requests.get(image_file)\n",
 "        image = Image.open(BytesIO(response.content)).convert(\"RGB\")\n",
 "    else:\n",

notebooks/qwen2.5-vl/qwen2.5-vl.ipynb (+1 -1)

@@ -323,7 +323,7 @@
 "## Prepare model inference pipeline\n",
 "[back to top ⬆️](#Table-of-contents:)\n",
 "\n",
-"OpenVINO integration with Optimum Intel provides ready-to-use API for model inference that can be used for smooth integration with transformers-based solutions. For loading model, we will use `OVModelForVisualCausalLM` class that have compatible interface with Transformers LLaVA implementation. For loading a model, `from_pretrained` method should be used. It accepts path to the model directory or model_id from HuggingFace hub (if model is not converted to OpenVINO format, conversion will be triggered automatically). Additionally, we can provide an inference device, quantization config (if model has not been quantized yet) and device-specific OpenVINO Runtime configuration. More details about model inference with Optimum Intel can be found in [documentation](https://huggingface.co/docs/optimum/intel/openvino/inference)."
+"OpenVINO integration with Optimum Intel provides ready-to-use API for model inference that can be used for smooth integration with transformers-based solutions. For loading model, we will use `OVModelForVisualCausalLM` class that have compatible interface with Transformers Qwen2.5VL implementation. For loading a model, `from_pretrained` method should be used. It accepts path to the model directory or model_id from HuggingFace hub (if model is not converted to OpenVINO format, conversion will be triggered automatically). Additionally, we can provide an inference device, quantization config (if model has not been quantized yet) and device-specific OpenVINO Runtime configuration. More details about model inference with Optimum Intel can be found in [documentation](https://huggingface.co/docs/optimum/intel/openvino/inference)."
 ]
 },
 {
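
For context, the loading step the edited cell describes looks roughly like this; the model path and device below are placeholders, not values taken from the notebook:

# Minimal sketch, assuming the Qwen2.5-VL model has already been exported to
# OpenVINO format in model_dir (otherwise from_pretrained converts it on the fly).
from optimum.intel import OVModelForVisualCausalLM
from transformers import AutoProcessor

model_dir = "Qwen2.5-VL-3B-Instruct-ov"  # hypothetical local directory or HF model_id
processor = AutoProcessor.from_pretrained(model_dir)

# An inference device (and, optionally, quantization / OpenVINO Runtime options)
# can be passed here, as the cell notes.
model = OVModelForVisualCausalLM.from_pretrained(model_dir, device="CPU")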
