
Bump the genai-workflow group across 1 directory with 11 updates #298

Conversation


@dependabot dependabot bot commented on behalf of github Aug 12, 2024

Bumps the genai-workflow group with 11 updates in the /workflows/charts/huggingface-llm directory:

| Package | From | To |
| --- | --- | --- |
| accelerate | 0.30.1 | 0.33.0 |
| datasets | 2.19.0 | 2.20.0 |
| einops | 0.7.0 | 0.8.0 |
| mkl-include | 2023.2.0 | 2024.2.1 |
| mkl | 2023.2.0 | 2024.2.1 |
| onnxruntime-extensions | 0.10.1 | 0.11.0 |
| onnxruntime | 1.17.3 | 1.18.1 |
| peft | 0.11.1 | 0.12.0 |
| protobuf | 4.24.4 | 5.27.3 |
| psutil | 5.9.5 | 6.0.0 |
| tokenizers | 0.19.1 | 0.20.0 |

Updates accelerate from 0.30.1 to 0.33.0

Release notes

Sourced from accelerate's releases.

v0.33.0: MUSA backend support and bugfixes


A small release this month, focused on added backend support and bug fixes:

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v0.32.1...v0.33.0

v0.32.0: Profilers, new hooks, speedups, and more!

Core

  • Utilize shard saving from the huggingface_hub rather than our own implementation (huggingface/accelerate#2795)
  • Refactor logging to use logger in dispatch_model (huggingface/accelerate#2855)
  • The Accelerator.step number is now restored when using save_state and load_state (huggingface/accelerate#2765)
  • A new profiler has been added, allowing users to collect performance metrics during model training and inference, including detailed analysis of execution time and memory consumption; the resulting traces can be viewed in Chrome's tracing tool (see the sketch after this list). Read more about it here (huggingface/accelerate#2883)
  • Reduced import times for doing import accelerate and any other major core import by 68%, now should be only slightly longer than doing import torch (huggingface/accelerate#2845)
  • Fixed a bug in get_backend and added a clear_device_cache utility (huggingface/accelerate#2857)
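
As a rough illustration of the new profiler, here is a minimal sketch. It assumes the ProfileKwargs handler and the accelerator.profile() context manager introduced with this release's profiler docs; treat the exact import path and argument names as assumptions rather than a verified API.

import torch
from accelerate import Accelerator
from accelerate.utils import ProfileKwargs  # assumed import path per the v0.32 docs

# Tiny stand-in model purely for illustration.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())

profile_kwargs = ProfileKwargs(activities=["cpu"], record_shapes=True)  # add "cuda" on GPU
accelerator = Accelerator(cpu=True, kwargs_handlers=[profile_kwargs])
model = accelerator.prepare(model)

with accelerator.profile() as prof:
    with torch.no_grad():
        model(torch.randn(4, 8))

# prof wraps a torch.profiler profile, so the usual reporting helpers apply.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))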

Distributed Data Parallelism

FSDP

  • If the output directory doesn't exist when using accelerate merge-weights, one will be automatically created (huggingface/accelerate#2854)

... (truncated)

Commits
  • 28a3b98 Release: v0.33.0
  • 415eddf feat(ci): add pip caching in CI (#2952)
  • 2308576 Properly handle Params4bit in set_module_tensor_to_device (#2934)
  • a5a3e57 Add torch.float8_e4m3fn format dtype_byte_size (#2945)
  • 0af1d8b delete CCL env var setting (#2927)
  • d16d737 Improve test reliability for Accelerator.free_memory() (#2935)
  • 7a5c231 Consider pynvml available when installed through the nvidia-ml-py distributio...
  • 4f02bb7 Fix import test (#2931)
  • 709fd1e Hotfix PyTorch Version Installation in CI Workflow for Minimum Version Matrix...
  • f4f1260 Correct loading of models with shared tensors when using accelerator.load_sta...
  • Additional commits viewable in compare view

Updates datasets from 2.19.0 to 2.20.0

Release notes

Sourced from datasets's releases.

2.20.0

Important

Datasets features

  • [Resumable IterableDataset] Add IterableDataset state_dict by @​lhoestq in huggingface/datasets#6658
    • checkpoint and resume an iterable dataset (e.g. when streaming):

      >>> from datasets import Dataset  # import needed to run this snippet
      >>> iterable_dataset = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)
      >>> for idx, example in enumerate(iterable_dataset):
      ...     print(example)
      ...     if idx == 2:
      ...         state_dict = iterable_dataset.state_dict()
      ...         print("checkpoint")
      ...         break
      >>> iterable_dataset.load_state_dict(state_dict)
      >>> print("restart from checkpoint")
      >>> for example in iterable_dataset:
      ...     print(example)

      Output:

      {'a': 0}
      {'a': 1}
      {'a': 2}
      checkpoint
      restart from checkpoint
      {'a': 3}
      {'a': 4}
      {'a': 5}
      

General improvements and bug fixes

... (truncated)

Commits

Updates einops from 0.7.0 to 0.8.0

Release notes

Sourced from einops's releases.

v0.8.0: tinygrad, small fixes and updates

TLDR

  • tinygrad backend added
  • resolve warning in py3.11 related to docstring
  • remove graph break for unpack (see the sketch after this list)
  • breaking: TF layers were updated to follow new instructions; the new layers are compatible with TF 2.16 and not compatible with old TF (they certainly do not work with TF 2.13)
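
For context on the unpack item above, here is a minimal NumPy sketch of einops' pack/unpack round trip (standard einops API, independent of any particular backend):

import numpy as np
from einops import pack, unpack

h = np.zeros((2, 3))
w = np.zeros((2, 3, 4))

# pack flattens everything matched by '*' and concatenates along it,
# returning the per-tensor shapes needed to reverse the operation.
packed, ps = pack([h, w], "b *")    # packed has shape (2, 3 + 12) = (2, 15)

h2, w2 = unpack(packed, ps, "b *")  # round-trips to the original shapes
assert h2.shape == (2, 3) and w2.shape == (2, 3, 4)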

What's Changed

New Contributors

Full Changelog: arogozhnikov/einops@v0.7.0...v0.8.0

Commits

Updates mkl-include from 2023.2.0 to 2024.2.1

Updates mkl from 2023.2.0 to 2024.2.1

Commits

Updates onnxruntime-extensions from 0.10.1 to 0.11.0

Release notes

Sourced from onnxruntime-extensions's releases.

v0.11.0

What's changed

  • Created Java packaging pipeline and published to Maven repository.
  • Added support for converting a Hugging Face FastTokenizer into an ONNX custom operator (see the sketch after this list).
  • Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
  • Fixed Whisper large model pre-processing bug.
  • Enabled eager execution for custom operator and refactored the header file structure.
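
To make the custom-operator story concrete, here is a minimal sketch of running a tokenizer model that uses the extensions' custom ops. get_library_path() and register_custom_ops_library are existing APIs; the model file name and its "text" input are hypothetical placeholders for whatever the conversion tooling produced.

import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Register the extensions' custom-op library so ONNX Runtime can resolve
# the tokenizer operators at session-creation time.
opts = ort.SessionOptions()
opts.register_custom_ops_library(get_library_path())

# "tokenizer.onnx" and the "text" input name are illustrative assumptions.
sess = ort.InferenceSession("tokenizer.onnx", opts)
outputs = sess.run(None, {"text": np.array(["hello world"])})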

Contributions

Contributors to ONNX Runtime Extensions include members across teams at Microsoft, along with our community members: @​sayanshaw24 @​wenbingl @​skottmckay @​natke @​hariharans29 @​jslhcl @​snnn @​kazssym @​YUNQIUGUO @​souptc @​yihonglyu

Commits
  • 8d8670f Fix the Linux and MacOS wheel build for packaging issues (#727)
  • b988f0d Update onebranch-windows-build-stage.yml
  • b1989b7 Revert net7.0 update for now (#701) (#712)
  • 1f31d33 Eager mode: cuda kernel support (#694)
  • 627e93a fix version in renaming (#692)
  • f9290e8 Add a status class for future tokenizer API implementation (#690)
  • 6464627 Refactor the header file directory and integrate the eager tensor implementat...
  • fe8cd9e Add extensions catalyst support (#684)
  • a96ed42 Update ext_java.cmake (#688)
  • 00a594f Standardize the inputs for ONNX STFT op for Whisper model (#681)
  • Additional commits viewable in compare view

Updates onnxruntime from 1.17.3 to 1.18.1

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.18.1

What's new?

Announcements:

  • ONNX Runtime Python packages now have numpy dependency >=1.21.6, <2.0. Support for numpy 2.0 will be added in a future release.
  • CUDA 12.x ONNX Runtime GPU packages are now built against cuDNN 9.x (1.18.0 packages previously depended on cuDNN 8.x). CUDA 11.x ONNX Runtime GPU packages continue to depend on cuDNN 8.x.
  • Windows packages require installation of Microsoft Visual C++ Redistributable Runtime 14.38 or newer.

TensorRT EP:

  • TensorRT Weightless API integration.
  • Support for TensorRT hardware compatible engines.
  • Support for INT64 types in TensorRT constant layer calibration.
  • Now using latest commit of onnx-tensorrt parser, which includes several issue fixes.
  • Additional TensorRT support and performance improvements.

Packages:

  • Publish CUDA 12 Java packages to Azure DevOps feed.
  • Various packaging pipeline fixes.

This patch release also features various other bug fixes, including a CUDA 12.5 build error fix.

Big thank you to @​yf711 for driving this release as the release manager and to all our contributors!

@​yf711 @​jchen351 @​mszhanyi @​snnn @​wangyems @​jywu-msft @​skottmckay @​chilo-ms @​moraxu @​kevinch-nv @​pengwa @​wejoncy @​pranavsharma @​Craigacp @​jslhcl @​adrianlizarraga @​inisis @​jeffbloo @​mo-ja @​kunal-vaishnavi @​sumitsays @​neNasko1 @​yufenglee @​dhruvbird @​wangshuai09 @​xiaoyu-work @​axinging @​yuslepukhin @​YUNQIUGUO @​shubhambhokare1 @​fs-eire @​afantino951 @​tboby @​HectorSVC @​baijumeswani

ONNX Runtime v1.18.0

Announcements

  • Windows ARM32 support has been dropped at the source code level.
  • Python version >=3.8 is now required for build.bat/build.sh (previously >=3.7). Note: If you have Python version <3.8, you can bypass the tools and use CMake directly.
  • The onnxruntime-mobile Android package and onnxruntime-mobile-c/onnxruntime-mobile-objc iOS cocoapods are being deprecated. Please use the onnxruntime-android Android package, and onnxruntime-c/onnxruntime-objc cocoapods, which support ONNX and ORT format models and all operators and data types. Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package on Custom build | onnxruntime.

Build System & Packages

  • CoreML execution provider now depends on coremltools.
  • Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
  • ONNX has been upgraded from 1.15 → 1.16.
  • EMSDK has been upgraded from 3.1.51 → 3.1.57.
  • Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
  • There is a new onnxruntime_CUDA_MINIMAL CMake option for building ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
  • Added support for Catalyst for macOS build support.
  • Added initial support for RISC-V and three new build options for it: --rv64, --riscv_toolchain_root, and --riscv_qemu_path.
  • Now you can build TensorRT EP with protobuf-lite instead of the full version of protobuf.
  • Some security-related compile/link flags have been moved from the default setting → new build option: --use_binskim_compliant_compile_flags. Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, this flag is default OFF.
  • Windows ARM64 build now depends on PyTorch CPUINFO library.
  • Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our Nuget packages will depend on kernel32.dll. Note: Windows systems without kernel32.dll need to have reverse forwarders (see API set loader operation - Win32 apps | Microsoft Learn for more information).

Core

  • Added ONNX 1.16 support.
  • Added additional optimizations related to Dynamo-exported models.
  • Improved testing infrastructure for EPs developed as shared libraries.
  • Exposed Reserve() in OrtAllocator to allow custom allocators to work when session.use_device_allocator_for_initializers is specified.

... (truncated)

Commits
  • 3871274 [ORT 1.18.1 Release] Update ORT numpy dependency to >=1.21.6,<2.0 (#21141)
  • d0aee20 [ORT 1.18.1 Release] Cherry pick 3rd round (#21129)
  • 8bfcf14 [ORT 1.18.1 Release] update 1.18.1 patch release version (#21143)
  • 25ab935 [ORT 1.18.1 Release] Cherry pick 2nd round (#21111)
  • 91fb865 [ORT 1.18.1 Release] Cherry pick 1st round (#21105)
  • 4573740 [ORT 1.18.0 Release] Cherry pick 3rd/Final round (#20677)
  • ed349b9 Mark end of version 17 and 18 C API (#20671)
  • d72b476 [ORT 1.18.0 Release] Cherry pick 2nd round (#20620)
  • 65f3fbf [ORT 1.18.0 Release] Cherry pick 1st round (#20585)
  • 204f1f5 Run fuzz testing before the CG task cleans up the build directory (#20500) (#...
  • Additional commits viewable in compare view

Updates peft from 0.11.1 to 0.12.0

Release notes

Sourced from peft's releases.

v0.12.0: New methods OLoRA, X-LoRA, FourierFT, HRA, and much more

Highlights

[Image: peft v0.12.0 release highlights]

New methods

OLoRA

@​tokenizer-decode added support for a new LoRA initialization strategy called OLoRA (#1828). With this initialization option, the LoRA weights are initialized to be orthonormal, which promises to improve training convergence. Similar to PiSSA, this can also be applied to models quantized with bitsandbytes. Check out the accompanying OLoRA examples.
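
A minimal sketch of opting in to OLoRA follows; the init_lora_weights="olora" string matches the PEFT docs for this release as I recall, so treat it as an assumption, and the model choice is purely illustrative.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any supported causal LM

# "olora" selects the orthonormal initialization described above (assumed string).
config = LoraConfig(r=8, init_lora_weights="olora")
model = get_peft_model(base, config)
model.print_trainable_parameters()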

X-LoRA

@​EricLBuehler added the X-LoRA method to PEFT (#1491). This is a mixture of experts approach that combines the strength of multiple pre-trained LoRA adapters. Documentation has yet to be added but check out the X-LoRA tests for how to use it.

FourierFT

@​Phoveran, @​zqgao22, @​Chaos96, and @​DSAILatHKUST added discrete Fourier transform fine-tuning to PEFT (#1838). This method promises to match LoRA in terms of performance while reducing the number of parameters even further. Check out the included FourierFT notebook.

HRA

@​DaShenZi721 added support for Householder Reflection Adaptation (#1864). This method bridges the gap between low rank adapters like LoRA on the one hand and orthogonal fine-tuning techniques such as OFT and BOFT on the other. As such, it is interesting for both LLMs and image generation models. Check out the HRA example on how to perform DreamBooth fine-tuning.

Enhancements

  • IA³ now supports merging of multiple adapters via the add_weighted_adapter method thanks to @​alexrs (#1701).
  • Call peft_model.get_layer_status() and peft_model.get_model_status() to get an overview of the layer/model status of the PEFT model. This can be especially helpful when dealing with multiple adapters or for debugging purposes. More information can be found in the docs (#1743).
  • DoRA now supports FSDP training, including with bitsandbytes quantization, aka QDoRA (#1806).
  • VeRA has been extended by @​dkopi to support targeting layers with different weight shapes (#1817).
  • @​kallewoof added the possibility for ephemeral GPU offloading. For now, this is only implemented for loading DoRA models, which can be sped up considerably for big models at the cost of a bit of extra VRAM (#1857).
  • Experimental: It is now possible to tell PEFT to use your custom LoRA layers through dynamic dispatching. Use this, for instance, to add LoRA layers for thus far unsupported layer types without the need to first create a PR on PEFT (but contributions are still welcome!) (#1875).

Examples

Changes

Casting of the adapter dtype

Important: If the base model is loaded in float16 (fp16) or bfloat16 (bf16), PEFT now autocasts adapter weights to float32 (fp32) instead of using the dtype of the base model (#1706). This requires more memory than previously but stabilizes training, so it's the more sensible default. To prevent this, pass autocast_adapter_dtype=False when calling get_peft_model, PeftModel.from_pretrained, or PeftModel.load_adapter.
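
A minimal sketch of opting out of the new fp32 autocasting, using the autocast_adapter_dtype flag named above (the model choice is illustrative):

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model loaded in bf16; PEFT 0.12 would upcast adapter weights to fp32
# by default, which this flag disables.
base = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
model = get_peft_model(base, LoraConfig(r=8), autocast_adapter_dtype=False)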

Adapter device placement

The logic of device placement when loading multiple adapters on the same model has been changed (#1742). Previously, PEFT would move all adapters to the device of the base model. Now, only the newly loaded/created adapter is moved to the base model's device. This allows users to have more fine-grained control over the adapter devices, e.g. allowing them to offload unused adapters to CPU more easily.

PiSSA

... (truncated)

Commits
  • e6cd24c Release v0.12.0 (#1946)
  • 05f57e9 PiSSA, OLoRA: Delete initial adapter after conversion instead of the active a...
  • 2ce83e0 FIX Decrease memory overhead of merging (#1944)
  • ebcd079 [WIP] ENH Add support for Qwen2 (#1906)
  • ba75bb1 FIX: More VeRA tests, fix tests, more checks (#1900)
  • 6472061 FIX Prefix tuning Grouped-Query Attention (#1901)
  • e02b938 FIX PiSSA & OLoRA with rank/alpha pattern, rslora (#1930)
  • 5268495 FEAT Add HRA: Householder Reflection Adaptation (#1864)
  • 2aaf9ce ENH Sync LoRA tp_layer methods with vanilla LoRA (#1919)
  • a019f86 FIX sft script print_trainable_parameters attr lookup (#1928)
  • Additional commits viewable in compare view

Updates protobuf from 4.24.4 to 5.27.3

Commits
  • 7cc670c Updating version.json and repo version numbers to: 27.3
  • 67d7298 Merge pull request #17617 from protocolbuffers/cp-utf8-ascii
  • e20cb7a Remove /utf-8 flag added in #14197
  • c9839cb Merge pull request #17473 from protocolbuffers/cp-revert-hack
  • 8a579c1 Downgrade CMake to 3.29 to workaround Abseil issue.
  • ba3e7d7 Revert workaround for std::mutex issues on github windows runners.
  • 861be78 Merge pull request #17331 from protocolbuffers/cp-cp
  • c1ec82f Merge pull request #17232 from simonberger/bugfix/php-ext-persistent-global-c...
  • aec8a76 Upgrade macos-11 tests to macos-12
  • 4e3b4f0 Use explicit names of our large runners
  • Additional commits viewable in compare view

Updates psutil from 5.9.5 to 6.0.0

Changelog

Sourced from psutil's changelog.

6.0.0

2024-06-18

Enhancements

  • 2109_: maxfile and maxpath fields were removed from the namedtuple returned by disk_partitions()_. Reason: on network filesystems (NFS) this can potentially take a very long time to complete.
  • 2366_, [Windows]: log debug message when using slower process APIs.
  • 2375_, [macOS]: provide arm64 wheels. (patch by Matthieu Darbois)
  • 2396_: process_iter()_ no longer pre-emptively checks whether PIDs have been reused. This makes process_iter()_ around 20x times faster.
  • 2396_: a new psutil.process_iter.cache_clear() API can be used to clear process_iter()_'s internal cache (see the sketch after this list).
  • 2401_, Support building with free-threaded CPython 3.13. (patch by Sam Gross)
  • 2407_: Process.connections()_ was renamed to Process.net_connections()_. The old name is still available, but it's deprecated (triggers a DeprecationWarning) and will be removed in the future.
  • 2425_: [Linux]: provide aarch64 wheels. (patch by Matthieu Darbois / Ben Raz)

Bug fixes

  • 2250_, [NetBSD]: Process.cmdline()_ sometimes fails with EBUSY, usually for long cmdlines with lots of arguments. In this case psutil retries getting the cmdline up to 50 times, and returns an empty list as a last resort.
  • 2254_, [Linux]: offline cpus raise NotImplementedError in cpu_freq() (patch by Shade Gladden)
  • 2272_: Add pickle support to psutil Exceptions.
  • 2359_, [Windows], [CRITICAL]: pid_exists()_ disagrees with Process_ on whether a pid exists when ERROR_ACCESS_DENIED.
  • 2360_, [macOS]: can't compile on macOS < 10.13. (patch by Ryan Schmidt)
  • 2362_, [macOS]: can't compile on macOS 10.11. (patch by Ryan Schmidt)
  • 2365_, [macOS]: can't compile on macOS < 10.9. (patch by Ryan Schmidt)
  • 2395_, [OpenBSD]: pid_exists()_ erroneously return True if the argument is a thread ID (TID) instead of a PID (process ID).
  • 2412_, [macOS]: can't compile on macOS 10.4 PowerPC due to missing MNT_ constants.

Porting notes

Version 6.0.0 introduces some changes which affect backward compatibility:

  • 2109_: the namedtuple returned by disk_partitions()_ no longer has maxfile and maxpath fields.
  • 2396_: process_iter()_ no longer pre-emptively checks whether PIDs have been reused. If you want to check for PID reusage you are supposed to use Process.is_running()_ against the yielded Process_ instances. That will also automatically remove reused PIDs from process_iter()_ internal cache.

... (truncated)

Commits
  • 3d5522a release
  • 5b30ef4 Add aarch64 manylinux wheels (#2425)
  • 1d092e7 test subprocesses: sleep() with an interval of 0.1 to make the test process m...
  • 5f80c12 Fix #2412, [macOS]: can't compile on macOS 10.4 PowerPC due to missing MNT_...
  • 89b6096 process_iter(): use another global var to keep track of reused PIDs
  • 9421bf8 openbsd: skip test if cmdline() returns [] due to EBUSY
  • 4b1a054 Fix #2250 / NetBSD / cmdline: retry on EBUSY. (#2421)
  • 20be5ae ruff: enable and fix 'unused variable' rule
  • 5530985 chore(ci): update actions (#2417)
  • 1c7cb0a Don't build with limited API for 3.13 free-threaded build (#2402)
  • Additional commits viewable in compare view

Updates tokenizers from 0.19.1 to 0.20.0

Release notes

Sourced from tokenizers's releases.

Release v0.20.0: faster encode, better python support

Release v0.20.0

This release is focused on performance and user experience.

Performance:

First off, we did a bit of benchmarking and found some room for improvement. With a few minor changes (mostly #1587), here is what we get on Llama3 running on a g6 instance on AWS (benchmark script: https://github.com/huggingface/tokenizers/blob/main/bindings/python/benches/test_tiktoken.py):

[Image: benchmark results]

Python API

We shipped better deserialization errors in general, and support for __str__ and __repr__ for all objects. This makes debugging a lot easier; see:

>>> from tokenizers import Tokenizer;
>>> tokenizer = Tokenizer.from_pretrained("bert-base-uncased");
>>> print(tokenizer)
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, ...}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, "[unused2]":3, "[unused3]":4, ...}))
>>> tokenizer
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, ...}))

The pre_tokenizer.Sequence and normalizer.Sequence are also more accessible now:

from tokenizers import normalizers

norm = normalizers.Sequence([normalizers.Strip(), normalizers.BertNormalizer()])
norm[0]                    # individual normalizers are now indexable
norm[1].lowercase = False  # and their attributes can be tweaked in place

What's Changed


Updates `accelerate` from 0.30.1 to 0.33.0
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](huggingface/accelerate@v0.30.1...v0.33.0)

Updates `datasets` from 2.19.0 to 2.20.0
- [Release notes](https://github.com/huggingface/datasets/releases)
- [Commits](huggingface/datasets@2.19.0...2.20.0)

Updates `einops` from 0.7.0 to 0.8.0
- [Release notes](https://github.com/arogozhnikov/einops/releases)
- [Commits](arogozhnikov/einops@v0.7.0...v0.8.0)

Updates `mkl-include` from 2023.2.0 to 2024.2.1

Updates `mkl` from 2023.2.0 to 2024.2.1
- [Release notes](https://github.com/oneapi-src/oneMKL/releases)
- [Commits](https://github.com/oneapi-src/oneMKL/commits)

Updates `onnxruntime-extensions` from 0.10.1 to 0.11.0
- [Release notes](https://github.com/microsoft/onnxruntime-extensions/releases)
- [Commits](microsoft/onnxruntime-extensions@v0.10.1...v0.11.0)

Updates `onnxruntime` from 1.17.3 to 1.18.1
- [Release notes](https://github.com/microsoft/onnxruntime/releases)
- [Changelog](https://github.com/microsoft/onnxruntime/blob/main/docs/ReleaseManagement.md)
- [Commits](microsoft/onnxruntime@v1.17.3...v1.18.1)

Updates `peft` from 0.11.1 to 0.12.0
- [Release notes](https://github.com/huggingface/peft/releases)
- [Commits](huggingface/peft@v0.11.1...v0.12.0)

Updates `protobuf` from 4.24.4 to 5.27.3
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf/blob/main/protobuf_release.bzl)
- [Commits](protocolbuffers/protobuf@v4.24.4...v5.27.3)

Updates `psutil` from 5.9.5 to 6.0.0
- [Changelog](https://github.com/giampaolo/psutil/blob/master/HISTORY.rst)
- [Commits](giampaolo/psutil@release-5.9.5...release-6.0.0)

Updates `tokenizers` from 0.19.1 to 0.20.0
- [Release notes](https://github.com/huggingface/tokenizers/releases)
- [Changelog](https://github.com/huggingface/tokenizers/blob/main/RELEASE.md)
- [Commits](huggingface/tokenizers@v0.19.1...v0.20.0)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: datasets
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: einops
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: mkl-include
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: mkl
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: onnxruntime-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: onnxruntime
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: peft
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: protobuf
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: psutil
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: tokenizers
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Aug 12, 2024

Dependency Review

The following issues were found:
  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 2 package(s) with unknown licenses.
See the Details below.

License Issues

workflows/charts/huggingface-llm/requirements.txt

| Package | Version | License | Issue Type |
| --- | --- | --- | --- |
| mkl-include | 2024.2.1 | Null | Unknown License |
| mkl | 2024.2.1 | Null | Unknown License |

OpenSSF Scorecard

Scorecard details
pip/accelerate 0.33.0: overall score 🟢 6.3

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 9 | Found 29/30 approved changesets -- score normalized to 9 |
| Maintained | 🟢 10 | 30 commit(s) and 17 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Security-Policy | ⚠️ 0 | security policy file not detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Packaging | 🟢 10 | packaging workflow detected |
| SAST | 🟢 4 | SAST tool is not run on all commits -- score normalized to 4 |

pip/datasets 2.20.0: overall score 🟢 5.8

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 3 | Found 9/30 approved changesets -- score normalized to 3 |
| Maintained | 🟢 10 | 30 commit(s) and 13 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Signed-Releases | ⚠️ -1 | no releases found |
| Security-Policy | 🟢 10 | security policy file detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |

pip/einops 0.8.0: overall score 🟢 4.4

| Check | Score | Reason |
| --- | --- | --- |
| Maintained | 🟢 4 | 2 commit(s) and 3 issue activity found in the last 90 days -- score normalized to 4 |
| Code-Review | ⚠️ 2 | Found 4/20 approved changesets -- score normalized to 2 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Security-Policy | ⚠️ 0 | security policy file not detected |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |

pip/mkl 2024.2.1: Unknown (no Scorecard data available)

pip/mkl-include 2024.2.1: Unknown (no Scorecard data available)

pip/onnxruntime 1.18.1: overall score 🟢 6.8

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 10 | all last 30 commits are reviewed through GitHub |
| Maintained | 🟢 10 | 30 commit(s) out of 30 and 8 issue activity out of 30 found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no badge detected |
| Vulnerabilities | 🟢 10 | no vulnerabilities detected |
| Signed-Releases | ⚠️ 0 | 0 out of 5 artifacts are signed or have provenance |
| Branch-Protection | 🟢 8 | branch protection is not maximal on development and all release branches |
| Security-Policy | 🟢 10 | security policy file detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | no published package detected |
| License | 🟢 10 | license file detected |
| Token-Permissions | ⚠️ 0 | non read-only tokens detected in GitHub workflows |
| Dependency-Update-Tool | 🟢 10 | update tool detected |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |

pip/onnxruntime-extensions 0.11.0: overall score 🟢 6.1

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 9 | Found 29/30 approved changesets -- score normalized to 9 |
| Maintained | 🟢 10 | 30 commit(s) and 9 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Security-Policy | 🟢 10 | security policy file detected |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Binary-Artifacts | 🟢 7 | binaries present in source code |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |

pip/peft 0.12.0: Unknown (no Scorecard data available)

pip/protobuf 5.27.3: overall score 🟢 7

| Check | Score | Reason |
| --- | --- | --- |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| CI-Tests | 🟢 9 | 27 out of 28 merged PRs checked by a CI test -- score normalized to 9 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| Code-Review | ⚠️ 0 | found 29 unreviewed changesets out of 30 -- score normalized to 0 |
| Contributors | 🟢 10 | 13 different organizations found -- score normalized to 10 |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Dependency-Update-Tool | 🟢 10 | update tool detected |
| Fuzzing | 🟢 10 | project is fuzzed |
| License | 🟢 9 | license file detected |
| Maintained | 🟢 10 | 30 commit(s) out of 30 and 4 issue activity out of 30 found in the last 90 days -- score normalized to 10 |
| Packaging | ⚠️ -1 | no published package detected |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| SAST | 🟢 5 | SAST tool is not run on all commits -- score normalized to 5 |
| Security-Policy | 🟢 10 | security policy file detected |
| Signed-Releases | ⚠️ 0 | 0 out of 5 artifacts are signed or have provenance |
| Token-Permissions | 🟢 10 | GitHub workflow tokens follow principle of least privilege |
| Vulnerabilities | 🟢 7 | 3 existing vulnerabilities detected |

pip/psutil 6.0.0: overall score 🟢 5.8

| Check | Score | Reason |
| --- | --- | --- |
| Maintained | 🟢 10 | 14 commit(s) and 13 issue activity found in the last 90 days -- score normalized to 10 |
| Code-Review | ⚠️ 2 | Found 8/30 approved changesets -- score normalized to 2 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Security-Policy | 🟢 10 | security policy file detected |
| Branch-Protection | ⚠️ 0 | branch protection not enabled on development/release branches |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Fuzzing | 🟢 10 | project is fuzzed |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |

pip/tokenizers 0.20.0: overall score 🟢 5.5

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 8 | Found 22/27 approved changesets -- score normalized to 8 |
| Maintained | 🟢 10 | 30 commit(s) and 24 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Signed-Releases | ⚠️ -1 | no releases found |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Security-Policy | ⚠️ 0 | security policy file not detected |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Packaging | 🟢 10 | packaging workflow detected |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Vulnerabilities | 🟢 5 | 5 existing vulnerabilities detected |

Scanned Manifest Files

workflows/charts/huggingface-llm/requirements.txt
  • accelerate@0.33.0
  • datasets@2.20.0
  • einops@0.8.0
  • mkl@2024.2.1
  • mkl-include@2024.2.1
  • onnxruntime@1.18.1
  • onnxruntime-extensions@0.11.0
  • peft@0.12.0
  • protobuf@5.27.3
  • psutil@6.0.0
  • tokenizers@0.20.0
  • accelerate@0.30.1
  • datasets@2.19.0
  • einops@0.7.0
  • mkl@2023.2.0
  • mkl-include@2023.2.0
  • onnxruntime@1.17.3
  • onnxruntime-extensions@0.10.1
  • peft@0.11.1
  • protobuf@4.24.4
  • psutil@5.9.5
  • tokenizers@0.19.1


dependabot bot commented on behalf of github Aug 19, 2024

Superseded by #317.

@dependabot dependabot bot closed this Aug 19, 2024
@dependabot dependabot bot deleted the dependabot/pip/workflows/charts/huggingface-llm/genai-workflow-9c4761c835 branch August 19, 2024 13:27
jitendra42 pushed a commit to jitendra42/ai-containers that referenced this pull request Oct 23, 2024
…uirements group (intel#298)

Bump mkdocs-material in /docs in the python-requirements group

Bumps the python-requirements group in /docs with 1 update: [mkdocs-material](https://github.com/squidfunk/mkdocs-material).


Updates `mkdocs-material` from 9.5.21 to 9.5.22
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](squidfunk/mkdocs-material@9.5.21...9.5.22)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: python-requirements
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>