
Commit 483d451

Finalize releases/2023/3 (openvinotoolkit#182)
1 parent 8b8c8ef commit 483d451

File tree

7 files changed (+14 −20 lines)


.github/workflows/causal_lm_cpp.yml

+6-6
@@ -23,7 +23,7 @@ jobs:
       - name: Install OpenVINO
         run: |
           mkdir ./ov/
-          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.0.0-14004-a240ae8fadd/l_openvino_toolkit_ubuntu20_2024.0.0.dev20240116_x86_64.tgz | tar --directory ./ov/ --strip-components 1 -xz
+          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz | tar --directory ./ov/ --strip-components 1 -xz
           sudo ./ov/install_dependencies/install_openvino_dependencies.sh
       - name: Download, convert and build
         run: |
@@ -49,7 +49,7 @@ jobs:
       - name: Install OpenVINO
         run: |
           mkdir ./ov/
-          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.0.0-14004-a240ae8fadd/l_openvino_toolkit_ubuntu20_2024.0.0.dev20240116_x86_64.tgz | tar --directory ./ov/ --strip-components 1 -xz
+          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz | tar --directory ./ov/ --strip-components 1 -xz
           sudo ./ov/install_dependencies/install_openvino_dependencies.sh
       - name: Download, convert and build
         run: |
@@ -138,20 +138,20 @@ jobs:
       - name: Install OpenVINO
         shell: bash
         run: |
-          curl --output ov.zip https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.0.0-14004-a240ae8fadd/w_openvino_toolkit_windows_2024.0.0.dev20240116_x86_64.zip
+          curl --output ov.zip https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/windows/w_openvino_toolkit_windows_2023.3.0.13775.ceeafaf64f3_x86_64.zip
           unzip ov.zip
       - name: Download, convert and build
         shell: cmd
         run: |
-          call w_openvino_toolkit_windows_2024.0.0.dev20240116_x86_64\setupvars.bat
+          call w_openvino_toolkit_windows_2023.3.0.13775.ceeafaf64f3_x86_64\setupvars.bat
           python -m pip install --upgrade-strategy eager transformers==4.35.2 "optimum>=1.14" -r ./llm_bench/python/requirements.txt ./thirdparty/openvino_contrib/modules/custom_operations/[transformers] --extra-index-url https://download.pytorch.org/whl/cpu
           python ./llm_bench/python/convert.py --model_id TinyLlama/TinyLlama-1.1B-Chat-v1.0 --output_dir ./TinyLlama-1.1B-Chat-v1.0/ --precision FP16 --stateful
           cmake -DCMAKE_BUILD_TYPE=Release -S ./text_generation/causal_lm/cpp/ -B ./build/
           cmake --build ./build/ --config Release -j
       - name: Compare
         shell: cmd
         run: |
-          call w_openvino_toolkit_windows_2024.0.0.dev20240116_x86_64\setupvars.bat
+          call w_openvino_toolkit_windows_2023.3.0.13775.ceeafaf64f3_x86_64\setupvars.bat
           convert_tokenizer .\TinyLlama-1.1B-Chat-v1.0\pytorch\dldt\FP16\ --output .\TinyLlama-1.1B-Chat-v1.0\pytorch\dldt\FP16\ --with-detokenizer
 
           .\build\Release\beam_search_causal_lm.exe .\TinyLlama-1.1B-Chat-v1.0\pytorch\dldt\FP16\ "69" > .\pred.txt
@@ -178,7 +178,7 @@ jobs:
       - name: Install OpenVINO
         run: |
           mkdir ./ov/
-          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.0.0-14004-a240ae8fadd/l_openvino_toolkit_ubuntu20_2024.0.0.dev20240116_x86_64.tgz | tar --directory ./ov/ --strip-components 1 -xz
+          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz | tar --directory ./ov/ --strip-components 1 -xz
           sudo ./ov/install_dependencies/install_openvino_dependencies.sh
       - name: Download, convert and build
         run: |
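The change above swaps dated nightly archives for the pinned 2023.3 release packages. As a quick sanity check, the release version embedded in such a package URL can be pulled out with a one-line helper; this is a hedged sketch (the function name and regex are illustrative, not part of the workflow):

```shell
# Illustrative helper, not part of this repo: extract the OpenVINO version
# from a toolkit archive URL such as
# ".../l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz".
ov_version_from_url() {
  # Keep only the file name, then capture the dotted version that follows
  # the "l_/w_openvino_toolkit_<platform>_" prefix.
  basename "$1" | sed -E 's/^[lw]_openvino_toolkit_[a-z0-9]+_([0-9]+\.[0-9]+\.[0-9]+)\..*$/\1/'
}

ov_version_from_url "https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz"
# → 2023.3.0
```

The same pattern covers the Windows zip names, whose prefix is `w_openvino_toolkit_windows_`.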

.github/workflows/stable_diffusion_1_5_cpp.yml

+1-1
@@ -23,7 +23,7 @@ jobs:
         run: |
           set -e
           mkdir openvino
-          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.0.0-14004-a240ae8fadd/l_openvino_toolkit_ubuntu20_2024.0.0.dev20240116_x86_64.tgz | tar --directory ./openvino/ --strip-components 1 -xz
+          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz | tar --directory ./openvino/ --strip-components 1 -xz
           sudo ./openvino/install_dependencies/install_openvino_dependencies.sh
           sudo apt install libeigen3-dev
       - name: Download / convert models
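The `tar --strip-components 1` in this step relies on the toolkit archive having a single top-level directory, which gets dropped so the contents land directly under `./openvino/`. A self-contained illustration of that behavior (throwaway archive; all names here are made up):

```shell
# Build a tiny archive that mimics the toolkit layout: one top-level dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/pkg_root/runtime"
echo "echo setupvars" > "$tmp/pkg_root/setupvars.sh"
tar -czf "$tmp/pkg.tgz" -C "$tmp" pkg_root

# Extract as the workflow does: the leading "pkg_root/" path component is
# stripped, so files appear directly inside the target directory.
mkdir -p "$tmp/openvino"
tar --directory "$tmp/openvino" --strip-components 1 -xzf "$tmp/pkg.tgz"
ls "$tmp/openvino"
# → runtime  setupvars.sh
```

If the archive had more than one top-level entry, stripping the first component would merge them, which is why the workflow extracts into a freshly created directory.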

README.md

+1-4
@@ -9,9 +9,6 @@ It includes the following pipelines:
 2. [Text generation C++ samples that support most popular models like LLaMA 2](./text_generation/causal_lm/cpp/)
 3. [Stable Diffuison (with LoRA) C++ image generation pipeline](./image_generation/stable_diffusion_1_5/cpp/)
 
-> [!NOTE]
-> This project is not for production use.
-
 ### License
 
 The GenAI repository is licensed under [Apache License Version 2.0](LICENSE).
@@ -22,6 +19,6 @@ your contribution under these terms.
 
 Requirements may vary for different samples. See respective readme files for more details,
 and make sure to install the OpenVINO version listed there. Refer to documentation to see
-[how to install OpenVINO](docs.openvino.ai/install).
+[how to install OpenVINO](https://docs.openvino.ai/install).
 
 The supported devices are CPU and GPU including Intel discrete GPU.

image_generation/stable_diffusion_1_5/cpp/README.md

+1-1
@@ -8,7 +8,7 @@ The pure C++ text-to-image pipeline, driven by the OpenVINO native C++ API for S
 
 C++ Packages:
 * [CMake](https://cmake.org/download/): Cross-platform build tool
-* [OpenVINO](docs.openvino.ai/install): Model inference
+* [OpenVINO](https://docs.openvino.ai/install): Model inference
 * [Eigen3](https://anaconda.org/conda-forge/eigen): LoRA enabling
 
 Prepare a python environment and install dependencies:

llm_bench/python/README.md

+2-2
@@ -69,7 +69,7 @@ pip install -r requirements.txt
 note: **You can specify the installed openvino version through pip install**
 ``` bash
 # e.g.
-pip install openvino==2023.2.0
+pip install openvino==2023.3.0
 ```
 
 ### 4. Run the following command to test the performance of one LLM model
@@ -120,4 +120,4 @@ python ./benchmark.py -m models/llama-2-7b-chat/pytorch -d CPU --torch_compile_b
 ### 1. NOTE
 > If you encounter any errors, please check **[NOTES.md](./doc/NOTES.md)** which provides solutions to the known errors.
 ### 2. Image generation
-> If you need to set parameters for image generation, you need to prepare a prompt file, please check **[IMAGE_GEN.md](./doc/IMAGE_GEN.md)**
+> If you need to set parameters for image generation, you need to prepare a prompt file, please check **[IMAGE_GEN.md](./doc/IMAGE_GEN.md)**
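Since the benchmark instructions now pin `openvino==2023.3.0`, a small guard can confirm the installed wheel matches the pin before running. This is a hedged sketch, not part of llm_bench; the helper name is invented and only the Python standard library's `importlib.metadata` is assumed:

```shell
# Illustrative check: fail fast if an installed pip package does not match
# the pinned version.
assert_pkg_version() {
  pkg="$1"; want="$2"
  # importlib.metadata exits non-zero if the distribution is not installed.
  have=$(python3 -c "import importlib.metadata as m; print(m.version('$pkg'))" 2>/dev/null) \
    || { echo "$pkg is not installed" >&2; return 1; }
  if [ "$have" = "$want" ]; then
    echo "$pkg==$have"
  else
    echo "expected $pkg==$want, found $have" >&2
    return 1
  fi
}

# Usage (after `pip install openvino==2023.3.0`):
# assert_pkg_version openvino 2023.3.0
```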

text_generation/causal_lm/cpp/README.md

+2-5
@@ -2,9 +2,6 @@
 
 These examples showcase inference of text-generation Large Language Models (LLMs): `chatglm`, `LLaMA`, `Qwen` and other models with the same signature. The applications don't have many configuration options to encourage the reader to explore and modify the source code. Loading `user_ov_extensions` provided by `openvino-tokenizers` to `ov::Core` enables tokenization. Run `convert_tokenizer` to generate IRs for the samples. [group_beam_searcher.hpp](group_beam_searcher.hpp) implements the algorithm of the same name, which is used by `beam_search_causal_lm`. There is also a Jupyter [notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot) which provides an example of LLM-powered Chatbot in Python.
 
-> [!NOTE]
-> This project is not for production use.
-
 ## How it works
 
 ### greedy_causal_lm
@@ -15,9 +12,9 @@ The program loads a tokenizer, a detokenizer and a model (`.xml` and `.bin`) to
 
 The program loads a tokenizer, a detokenizer and a model (`.xml` and `.bin`) to OpenVINO. A prompt is tokenized and passed to the model. The model predicts a distribution over the next tokens and group beam search samples from that distribution to explore possible sequesnses. The result is converted to chars and printed.
 
-## Install OpenVINO Runtime
+## Install OpenVINO
 
-Install OpenVINO Runtime from an archive: [Linux](https://docs.openvino.ai/2023.2/openvino_docs_install_guides_installing_openvino_from_archive_linux.html). `<INSTALL_DIR>` below refers to the extraction location.
+Install [OpenVINO Archives >= 2023.3](https://docs.openvino.ai/install). `<INSTALL_DIR>` below refers to the extraction location.
 
 ## Build `greedy_causal_lm`, `beam_search_causal_lm` and `user_ov_extensions`
 

thirdparty/openvino_contrib

Submodule openvino_contrib updated 24 files
