
Add code path for LLAMA_CPP plugins to load models directly from file #23432

Merged
3 commits, merged Mar 14, 2024

Conversation

vshampor
Contributor

Added an extra conditional branch specifically for LLAMA_CPP_* plugins (openvinotoolkit/openvino_contrib#891) that need to manage loading the model directly from disk on their own, without instantiating ov::Model.
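For context, the conditional path lets the core hand the on-disk model path straight to such a plugin instead of first deserializing it into an ov::Model via read_model. Below is a minimal caller-side sketch using the public ov::Core API; the device name "LLAMA_CPP" and the .gguf filename are illustrative placeholders, not values taken from this PR.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // With the new branch, compile_model(path, device) can forward the raw file
    // path to a plugin that loads the model itself, so no ov::Model is built.
    // "LLAMA_CPP" and "model.gguf" are illustrative placeholders.
    ov::CompiledModel compiled = core.compile_model("model.gguf", "LLAMA_CPP");

    ov::InferRequest request = compiled.create_infer_request();
    // ... set input tensors and call request.infer() as usual ...
    return 0;
}
```

Per the description, the branch is specific to the LLAMA_CPP_* plugins; other devices keep the existing read-model-then-compile flow.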

@vshampor vshampor requested a review from a team as a code owner March 13, 2024 10:41
@github-actions github-actions bot added the category: inference OpenVINO Runtime library - Inference label Mar 13, 2024
@ilya-lavrenov ilya-lavrenov added this to the 2024.1 milestone Mar 13, 2024
@ilya-lavrenov
Contributor

Please, fix code style

@ilya-lavrenov
Contributor

build_jenkins

@ilya-lavrenov ilya-lavrenov enabled auto-merge March 14, 2024 08:11
@ilya-lavrenov ilya-lavrenov added this pull request to the merge queue Mar 14, 2024
Merged via the queue into openvinotoolkit:master with commit 144ba8c Mar 14, 2024
107 checks passed
alvoron pushed a commit to alvoron/openvino that referenced this pull request Apr 29, 2024
5 participants