
Bump the genai-workflow group across 1 directory with 11 updates #349

Conversation

dependabot[bot] commented on behalf of GitHub · Sep 2, 2024

Bumps the genai-workflow group with 11 updates in the /workflows/charts/huggingface-llm directory:

| Package | From | To |
| --- | --- | --- |
| [accelerate](https://github.com/huggingface/accelerate) | `0.30.1` | `0.33.0` |
| [datasets](https://github.com/huggingface/datasets) | `2.19.0` | `2.21.0` |
| [einops](https://github.com/arogozhnikov/einops) | `0.7.0` | `0.8.0` |
| [mkl-include](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl.html) | `2023.2.0` | `2024.2.1` |
| [mkl](https://github.com/oneapi-src/oneMKL) | `2023.2.0` | `2024.2.1` |
| [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) | `0.10.1` | `0.12.0` |
| [onnxruntime](https://github.com/microsoft/onnxruntime) | `1.17.3` | `1.19.0` |
| [peft](https://github.com/huggingface/peft) | `0.11.1` | `0.12.0` |
| [protobuf](https://github.com/protocolbuffers/protobuf) | `4.24.4` | `5.28.0` |
| [psutil](https://github.com/giampaolo/psutil) | `5.9.5` | `6.0.0` |
| [tokenizers](https://github.com/huggingface/tokenizers) | `0.19.1` | `0.20.0` |

Updates accelerate from 0.30.1 to 0.33.0

Release notes

Sourced from accelerate's releases.

v0.33.0: MUSA backend support and bugfixes

A small release this month, with key focuses on added backend support and bug fixes:

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v0.32.1...v0.33.0

v0.32.0: Profilers, new hooks, speedups, and more!

Core

  • Utilize shard saving from the huggingface_hub rather than our own implementation (huggingface/accelerate#2795)
  • Refactor logging to use logger in dispatch_model (huggingface/accelerate#2855)
  • The Accelerator.step number is now restored when using save_state and load_state (huggingface/accelerate#2765)
  • A new profiler has been added, allowing users to collect performance metrics during model training and inference, including detailed analysis of execution time and memory consumption. The results can then be viewed in Chrome's tracing tool; see the sketch just after this list. Read more about it here (huggingface/accelerate#2883)
  • Reduced import times for import accelerate and any other major core import by 68%; it should now be only slightly longer than import torch (huggingface/accelerate#2845)
  • Fixed a bug in get_backend and added a clear_device_cache utility (huggingface/accelerate#2857)
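
The new profiler is configured through a kwargs handler; here is a minimal sketch of the intended usage, with `ProfileKwargs` and its fields taken from the accelerate profiler guide (treat the exact field names, especially `output_trace_dir`, as assumptions):

```python
import torch
from accelerate import Accelerator, ProfileKwargs

# Configure what the profiler records; the Chrome trace is written as JSON
# that can be opened in chrome://tracing.
profile_kwargs = ProfileKwargs(
    activities=["cpu"],           # add "cuda" to capture GPU kernels
    record_shapes=True,
    output_trace_dir="./trace",   # assumed field name for the trace output
)
accelerator = Accelerator(kwargs_handlers=[profile_kwargs])

model = accelerator.prepare(torch.nn.Linear(128, 128))
with accelerator.profile() as prof:
    with torch.no_grad():
        model(torch.randn(8, 128))

# Summarize execution time per op, as in the underlying torch.profiler API.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```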

Distributed Data Parallelism

FSDP

  • If the output directory doesn't exist when using accelerate merge-weights, one will be automatically created (huggingface/accelerate#2854)

... (truncated)

Commits
  • 28a3b98 Release: v0.33.0
  • 415eddf feat(ci): add pip caching in CI (#2952)
  • 2308576 Properly handle Params4bit in set_module_tensor_to_device (#2934)
  • a5a3e57 Add torch.float8_e4m3fn format dtype_byte_size (#2945)
  • 0af1d8b delete CCL env var setting (#2927)
  • d16d737 Improve test reliability for Accelerator.free_memory() (#2935)
  • 7a5c231 Consider pynvml available when installed through the nvidia-ml-py distributio...
  • 4f02bb7 Fix import test (#2931)
  • 709fd1e Hotfix PyTorch Version Installation in CI Workflow for Minimum Version Matrix...
  • f4f1260 Correct loading of models with shared tensors when using accelerator.load_sta...
  • Additional commits viewable in compare view

Updates datasets from 2.19.0 to 2.21.0

Release notes

Sourced from datasets's releases.

2.21.0

Features

  • Support pyarrow large_list by @​albertvillanova in huggingface/datasets#7019
    • Support Polars round trip:
      import polars as pl
      from datasets import Dataset
      df1 = pl.from_dict({"col_1": [[1, 2], [3, 4]]})
      df2 = Dataset.from_polars(df1).to_polars()
      assert df1.equals(df2)

What's Changed

... (truncated)

Commits

Updates einops from 0.7.0 to 0.8.0

Release notes

Sourced from einops's releases.

v0.8.0: tinygrad, small fixes and updates

TLDR

  • tinygrad backend added
  • resolve warning in py3.11 related to docstring
  • remove graph break for unpack (see the sketch just after this list)
  • breaking: TF layers were updated to follow new instructions; the new layers are compatible with TF 2.16 and not compatible with old TF (they certainly do not work with TF 2.13)
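
For context on the `unpack` item above, `pack`/`unpack` are einops' pair of functions for flattening several tensors along a wildcard axis and splitting them back. A minimal sketch of standard usage (not specific to this release):

```python
import numpy as np
from einops import pack, unpack

x = np.zeros((2, 3, 5))
y = np.zeros((2, 5))

# '*' absorbs whatever axes differ between the inputs; b and c must match.
packed, ps = pack([x, y], "b * c")    # packed.shape == (2, 4, 5)
x2, y2 = unpack(packed, ps, "b * c")  # shapes restored: (2, 3, 5) and (2, 5)

assert x2.shape == x.shape and y2.shape == y.shape
```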

What's Changed

New Contributors

Full Changelog: arogozhnikov/einops@v0.7.0...v0.8.0

Commits

Updates mkl-include from 2023.2.0 to 2024.2.1

Updates mkl from 2023.2.0 to 2024.2.1

Commits

Updates onnxruntime-extensions from 0.10.1 to 0.12.0

Release notes

Sourced from onnxruntime-extensions's releases.

v0.12.0

What's Changed

  • Added C APIs for language, vision and audio processors including new FeatureExtractor for Whisper model
  • Support for Phi-3 Small Tokenizer and new OpenAI tiktoken format for fast loading of BPE tokenizers
  • Added new CUDA custom operators such as MulSigmoid, Transpose2DCast, ReplaceZero, AddSharedInput and MulSharedInput
  • Enhanced Custom Op Lite API on GPU and fused kernels for DORT
  • Bug fixes, including the null bos_token for the Qwen2 tokenizer and a SentencePiece-converted FastTokenizer issue with non-ASCII characters, as well as necessary updates for MSVC 19.40 and the numpy 2.0 release

New Contributors

Full Changelog: microsoft/onnxruntime-extensions@v0.11.0...v0.12.0

v0.11.0

What's Changed

  • Created Java packaging pipeline and published to Maven repository.
  • Added support for conversion of Huggingface FastTokenizer into ONNX custom operator.
  • Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
  • Fixed Whisper large model pre-processing bug.
  • Enabled eager execution for custom operator and refactored the header file structure.

Contributions

Contributors to ONNX Runtime Extensions include members across teams at Microsoft, along with our community members: @​sayanshaw24 @​wenbingl @​skottmckay @​natke @​hariharans29 @​jslhcl @​snnn @​kazssym @​YUNQIUGUO @​souptc @​yihonglyu

Commits
  • cb47d2c Update nuget extraction path for iOS xcframework (#792)
  • b27fbbe Update macosx framework packaging to follow apple guidelines (#776) (#789)
  • c7a2d45 Update build-package-for-windows.yml (#784)
  • 3ce1e9f Upgrade ESRP signing task from v2 to v5 (#780)
  • e113ed3 removed OpenAIAudioToText from config (#777)
  • c9c11b4 Fix the windows API missing issue and Linux shared library size issue for Jav...
  • c3145b8 add the decoder_prompt_id for whisper tokenizer (#775)
  • 620050f reimplement resize cpu kernel for image processing (#768)
  • d79299e increase timeout (#773)
  • 735041e increase timeout (#772)
  • Additional commits viewable in compare view

Updates onnxruntime from 1.17.3 to 1.19.0

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.19

Announcements

  • Training (pypi) packages are delayed from package manager release due to some publishing errors. Feel free to contact @​maanavd if you need release candidates for some workflows ASAP. In the meantime, binaries are attached to this post. This message will be deleted once this ceases to be the case. Thanks for your understanding :)
  • Also note that the wrong commit was initially tagged with v1.19.0. The final commit has since been correctly tagged: microsoft/onnxruntime@26250ae. This shouldn't affect much, but sorry for the inconvenience!

Build System & Packages

  • Support for Numpy 2.x has been added
  • Qualcomm SDK has been upgraded to 2.25
  • ONNX has been upgraded from 1.16 → 1.16.1
  • Default GPU packages use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x); CUDA 11.x/cuDNN 8.x packages are moved to the aiinfra VS feed
  • TensorRT 10.2 support added
  • Introduced Java CUDA 12 packages on Maven
  • Discontinued support for Xamarin (Xamarin reached EOL on May 1, 2024)
  • Discontinued support for macOS 11 and increased the minimum supported macOS version to 12 (macOS 11 reached EOL in September 2023)
  • Discontinued support for iOS 12 and increased the minimum supported iOS version to 13

Core

Performance

  • Added QDQ support for INT4 quantization in CPU and CUDA Execution Providers
  • Implemented FlashAttention on CPU to improve performance for GenAI prompt cases
  • Improved INT4 performance on CPU (X64, ARM64) and NVIDIA GPUs

Execution Providers

  • TensorRT

    • Updated to support TensorRT 10.2
    • Removed calls to deprecated APIs
    • Enabled the refittable embedded engine when the ONNX model is provided as a byte stream
  • CUDA

    • Upgraded cutlass to 3.5.0 for performance improvement of memory efficient attention.
    • Updated MultiHeadAttention and Attention operators to be thread-safe.
    • Added sdpa_kernel provider option to choose kernel for Scaled Dot-Product Attention.
    • Expanded op support - Tile (bf16)
  • CPU

    • Expanded op support - GroupQueryAttention, SparseAttention (for Phi-3 small)
  • QNN

    • Updated to support QNN SDK 2.25
    • Expanded op support - HardSigmoid, ConvTranspose 3d, Clip (int32 data), Matmul (int4 weights), Conv (int4 weights), prelu (fp16)
    • Expanded fusion support – Conv + Clip/Relu fusion
  • OpenVINO

    • Added support for OpenVINO 2024.3
    • Support for enabling EpContext using session options
  • DirectML

... (truncated)

Commits

Updates peft from 0.11.1 to 0.12.0

Release notes

Sourced from peft's releases.

v0.12.0: New methods OLoRA, X-LoRA, FourierFT, HRA, and much more

Highlights


New methods

OLoRA

@​tokenizer-decode added support for a new LoRA initialization strategy called OLoRA (#1828). With this initialization option, the LoRA weights are initialized to be orthonormal, which promises to improve training convergence. Similar to PiSSA, this can also be applied to models quantized with bitsandbytes. Check out the accompanying OLoRA examples.
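
A minimal sketch of opting into OLoRA initialization via LoraConfig, assuming the `init_lora_weights="olora"` option described in the PEFT docs; the tiny Sequential model here is a stand-in for a real transformer:

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

base_model = nn.Sequential(nn.Linear(64, 64))  # stand-in for a real base model

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["0"],       # the Sequential's single Linear layer
    init_lora_weights="olora",  # orthonormal initialization instead of the default
)
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()
```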

X-LoRA

@​EricLBuehler added the X-LoRA method to PEFT (#1491). This is a mixture of experts approach that combines the strength of multiple pre-trained LoRA adapters. Documentation has yet to be added but check out the X-LoRA tests for how to use it.

FourierFT

@​Phoveran, @​zqgao22, @​Chaos96, and @​DSAILatHKUST added discrete Fourier transform fine-tuning to PEFT (#1838). This method promises to match LoRA in terms of performance while reducing the number of parameters even further. Check out the included FourierFT notebook.

HRA

@​DaShenZi721 added support for Householder Reflection Adaptation (#1864). This method bridges the gap between low rank adapters like LoRA on the one hand and orthogonal fine-tuning techniques such as OFT and BOFT on the other. As such, it is interesting for both LLMs and image generation models. Check out the HRA example on how to perform DreamBooth fine-tuning.

Enhancements

  • IA³ now supports merging of multiple adapters via the add_weighted_adapter method thanks to @​alexrs (#1701).
  • Call peft_model.get_layer_status() and peft_model.get_model_status() to get an overview of the layer/model status of the PEFT model. This can be especially helpful when dealing with multiple adapters or for debugging purposes. More information can be found in the docs, and a short sketch follows this list (#1743).
  • DoRA now supports FSDP training, including with bitsandbytes quantization, aka QDoRA (#1806).
  • VeRA has been extended by @​dkopi to support targeting layers with different weight shapes (#1817).
  • @​kallewoof added the possibility for ephemeral GPU offloading. For now, this is only implemented for loading DoRA models, which can be sped up considerably for big models at the cost of a bit of extra VRAM (#1857).
  • Experimental: It is now possible to tell PEFT to use your custom LoRA layers through dynamic dispatching. Use this, for instance, to add LoRA layers for thus far unsupported layer types without the need to first create a PR on PEFT (but contributions are still welcome!) (#1875).
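
A short sketch of the status helpers named in the list above (reusing `peft_model` from the OLoRA sketch; the exact fields on the returned objects are assumptions based on the PEFT docs):

```python
# Model-level summary: adapter names, trainable parameter counts, etc.
print(peft_model.get_model_status())

# One entry per adapted layer; useful when juggling multiple adapters.
for layer_status in peft_model.get_layer_status():
    print(layer_status)
```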

Examples

Changes

Casting of the adapter dtype

Important: If the base model is loaded in float16 (fp16) or bfloat16 (bf16), PEFT now autocasts adapter weights to float32 (fp32) instead of using the dtype of the base model (#1706). This requires more memory than previously but stabilizes training, so it's the more sensible default. To prevent this, pass autocast_adapter_dtype=False when calling get_peft_model, PeftModel.from_pretrained, or PeftModel.load_adapter.
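
A sketch of opting out of the new fp32 autocasting; the `autocast_adapter_dtype=False` argument comes straight from the note above, while `base_model`, `config`, and the adapter path are placeholders:

```python
from peft import PeftModel, get_peft_model

# Creating a fresh adapter without upcasting its weights to fp32
# (base_model and config as in the earlier sketch):
peft_model = get_peft_model(base_model, config, autocast_adapter_dtype=False)

# Loading an existing adapter (the path is a placeholder):
peft_model = PeftModel.from_pretrained(
    base_model, "path/to/adapter", autocast_adapter_dtype=False
)
```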

Adapter device placement

The logic of device placement when loading multiple adapters on the same model has been changed (#1742). Previously, PEFT would move all adapters to the device of the base model. Now, only the newly loaded/created adapter is moved to the base model's device. This allows users to have more fine-grained control over the adapter devices, e.g. allowing them to offload unused adapters to CPU more easily.

PiSSA

... (truncated)

Commits
  • e6cd24c Release v0.12.0 (#1946)
  • 05f57e9 PiSSA, OLoRA: Delete initial adapter after conversion instead of the active a...
  • 2ce83e0 FIX Decrease memory overhead of merging (#1944)
  • ebcd079 [WIP] ENH Add support for Qwen2 (#1906)
  • ba75bb1 FIX: More VeRA tests, fix tests, more checks (#1900)
  • 6472061 FIX Prefix tuning Grouped-Query Attention (#1901)
  • e02b938 FIX PiSSA & OLoRA with rank/alpha pattern, rslora (#1930)
  • 5268495 FEAT Add HRA: Householder Reflection Adaptation (#1864)
  • 2aaf9ce ENH Sync LoRA tp_layer methods with vanilla LoRA (#1919)
  • a019f86 FIX sft script print_trainable_parameters attr lookup (#1928)
  • Additional commits viewable in compare view

Updates protobuf from 4.24.4 to 5.28.0

Commits
  • 439c42c Updating version.json and repo version numbers to: 28.0
  • c9454f4 Remove --copt="-Werror" from .bazelrc (#18005)
  • f5a1b17 Move -Werror to our test/dev bazelrc files. (#17938)
  • 0c9e14a Merge pull request #17917 from thomasvl/patch_objc_to_28
  • 6a6ebe4 Merge pull request #17919 from protocolbuffers/28.x-202408221734
  • 09ba2bb Updating version.json and repo version numbers to: 28.0-dev
  • e340f52 Updating version.json and repo version numbers to: 28.0-rc3
  • b276420 [ObjC] Issue stderr warnings for deprecated generation options.
  • 13f850d Merge pull request #17913 from protocolbuffers/cp-compat-upgrade
  • 6bf01c5 Binary compatibility shims for GeneratedMessageV3, SingleFieldBuilderV3, Repe...
  • Additional commits viewable in compare view

Updates psutil from 5.9.5 to 6.0.0

Changelog

Sourced from psutil's changelog.

6.0.0

2024-06-18

Enhancements

  • #2109: maxfile and maxpath fields were removed from the namedtuple returned by disk_partitions(). Reason: on network filesystems (NFS) this can potentially take a very long time to complete.
  • #2366, [Windows]: log a debug message when using slower process APIs.
  • #2375, [macOS]: provide arm64 wheels. (patch by Matthieu Darbois)
  • #2396: process_iter() no longer pre-emptively checks whether PIDs have been reused. This makes process_iter() around 20x faster.
  • #2396: a new psutil.process_iter.cache_clear() API can be used to clear the process_iter() internal cache.
  • #2401: support building with free-threaded CPython 3.13. (patch by Sam Gross)
  • #2407: Process.connections() was renamed to Process.net_connections(). The old name is still available, but it is deprecated (triggers a DeprecationWarning) and will be removed in the future. (See the sketch after the porting notes below.)
  • #2425, [Linux]: provide aarch64 wheels. (patch by Matthieu Darbois / Ben Raz)

Bug fixes

  • #2250, [NetBSD]: Process.cmdline() sometimes fails with EBUSY. It usually happens for long cmdlines with lots of arguments. In this case, retry getting the cmdline up to 50 times, and return an empty list as a last resort.
  • #2254, [Linux]: offline CPUs raise NotImplementedError in cpu_freq(). (patch by Shade Gladden)
  • #2272: add pickle support to psutil exceptions.
  • #2359, [Windows], [CRITICAL]: pid_exists() disagrees with Process on whether a pid exists when ERROR_ACCESS_DENIED.
  • #2360, [macOS]: can't compile on macOS < 10.13. (patch by Ryan Schmidt)
  • #2362, [macOS]: can't compile on macOS 10.11. (patch by Ryan Schmidt)
  • #2365, [macOS]: can't compile on macOS < 10.9. (patch by Ryan Schmidt)
  • #2395, [OpenBSD]: pid_exists() erroneously returns True if the argument is a thread ID (TID) instead of a PID (process ID).
  • #2412, [macOS]: can't compile on macOS 10.4 PowerPC due to missing MNT_ constants.

Porting notes

Version 6.0.0 introduces some changes which affect backward compatibility:

  • #2109: the namedtuple returned by disk_partitions() no longer has maxfile and maxpath fields.
  • #2396: process_iter() no longer pre-emptively checks whether PIDs have been reused. If you want to check for PID reuse, use Process.is_running() against the yielded Process instances; that will also automatically remove reused PIDs from the process_iter() internal cache.
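
A short sketch of the APIs mentioned above, all named in the changelog (psutil ≥ 6.0.0):

```python
import psutil

# process_iter() is now much faster because it no longer pre-emptively
# checks for PID reuse; opt back in per process with is_running().
for proc in psutil.process_iter(["pid", "name"]):
    if not proc.is_running():
        continue  # PID was reused; this also evicts it from the internal cache
    print(proc.info["pid"], proc.info["name"])

psutil.process_iter.cache_clear()  # new API to clear the internal cache

# Process.connections() is deprecated in favor of net_connections().
me = psutil.Process()
print(len(me.net_connections()))
```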

... (truncated)

Commits
  • 3d5522a release
  • 5b30ef4 Add aarch64 manylinux wheels (#2425)
  • 1d092e7 test subprocesses: sleep() with an interval of 0.1 to make the test process m...
  • 5f80c12 Fix #2412, [macOS]: can't compile on macOS 10.4 PowerPC due to missing MNT_...
  • 89b6096 process_iter(): use another global var to keep track of reused PIDs
  • 9421bf8 openbsd: skip test if cmdline() returns [] due to EBUSY
  • 4b1a054 Fix #2250 / NetBSD / cmdline: retry on EBUSY. (#2421)
  • 20be5ae ruff: enable and fix 'unused variable' rule
  • 5530985 chore(ci): update actions (#2417)
  • 1c7cb0a Don't build with limited API for 3.13 free-threaded build (#2402)
  • Additional commits viewable in compare view

Updates tokenizers from 0.19.1 to 0.20.0

Release notes

Sourced from tokenizers's releases.

Release v0.20.0: faster encode, better python support

This release is focused on performance and user experience.

Performance:

First off, we did a bit of benchmarking and found some room for improvement! With a few minor changes (mostly #1587), here is what we get on Llama3 running on a g6 instance on AWS (benchmark: https://github.com/huggingface/tokenizers/blob/main/bindings/python/benches/test_tiktoken.py). [benchmark figure omitted]

Python API

We shipped better deserialization errors in general, and support for __str__ and __repr__ for all the objects. This allows for much easier debugging; see this:

>>> from tokenizers import Tokenizer
>>> tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
>>> print(tokenizer)
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, ...}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, "[unused2]":3, "[unused3]":4, ...}))
>>> tokenizer
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", ...
Description has been truncated


Updates `accelerate` from 0.30.1 to 0.33.0
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](huggingface/accelerate@v0.30.1...v0.33.0)

Updates `datasets` from 2.19.0 to 2.21.0
- [Release notes](https://github.com/huggingface/datasets/releases)
- [Commits](huggingface/datasets@2.19.0...2.21.0)

Updates `einops` from 0.7.0 to 0.8.0
- [Release notes](https://github.com/arogozhnikov/einops/releases)
- [Commits](arogozhnikov/einops@v0.7.0...v0.8.0)

Updates `mkl-include` from 2023.2.0 to 2024.2.1

Updates `mkl` from 2023.2.0 to 2024.2.1
- [Release notes](https://github.com/oneapi-src/oneMKL/releases)
- [Commits](https://github.com/oneapi-src/oneMKL/commits)

Updates `onnxruntime-extensions` from 0.10.1 to 0.12.0
- [Release notes](https://github.com/microsoft/onnxruntime-extensions/releases)
- [Commits](microsoft/onnxruntime-extensions@v0.10.1...v0.12.0)

Updates `onnxruntime` from 1.17.3 to 1.19.0
- [Release notes](https://github.com/microsoft/onnxruntime/releases)
- [Changelog](https://github.com/microsoft/onnxruntime/blob/main/docs/ReleaseManagement.md)
- [Commits](microsoft/onnxruntime@v1.17.3...v1.19.0)

Updates `peft` from 0.11.1 to 0.12.0
- [Release notes](https://github.com/huggingface/peft/releases)
- [Commits](huggingface/peft@v0.11.1...v0.12.0)

Updates `protobuf` from 4.24.4 to 5.28.0
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf/blob/main/protobuf_release.bzl)
- [Commits](protocolbuffers/protobuf@v4.24.4...v5.28.0)

Updates `psutil` from 5.9.5 to 6.0.0
- [Changelog](https://github.com/giampaolo/psutil/blob/master/HISTORY.rst)
- [Commits](giampaolo/psutil@release-5.9.5...release-6.0.0)

Updates `tokenizers` from 0.19.1 to 0.20.0
- [Release notes](https://github.com/huggingface/tokenizers/releases)
- [Changelog](https://github.com/huggingface/tokenizers/blob/main/RELEASE.md)
- [Commits](huggingface/tokenizers@v0.19.1...v0.20.0)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: datasets
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: einops
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: mkl-include
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: mkl
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: onnxruntime-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: onnxruntime
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: peft
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
- dependency-name: protobuf
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: psutil
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: genai-workflow
- dependency-name: tokenizers
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: genai-workflow
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the labels dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) · Sep 2, 2024

github-actions bot commented Sep 2, 2024

Dependency Review

The following issues were found:
  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 3 package(s) with unknown licenses.
See the Details below.

License Issues

workflows/charts/huggingface-llm/requirements.txt

| Package | Version | License | Issue Type |
| --- | --- | --- | --- |
| mkl-include | 2024.2.1 | Null | Unknown License |
| mkl | 2024.2.1 | Null | Unknown License |
| protobuf | 5.28.0 | Null | Unknown License |

OpenSSF Scorecard

Scorecard details
pip/accelerate 0.33.0 (score 🟢 6.4)

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 9 | Found 27/30 approved changesets -- score normalized to 9 |
| Maintained | 🟢 10 | 30 commit(s) and 18 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Security-Policy | ⚠️ 0 | security policy file not detected |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Packaging | 🟢 10 | packaging workflow detected |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| SAST | 🟢 6 | SAST tool is not run on all commits -- score normalized to 6 |

pip/datasets 2.21.0 (score 🟢 5.9)

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 4 | Found 13/30 approved changesets -- score normalized to 4 |
| Maintained | 🟢 10 | 30 commit(s) and 11 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Security-Policy | 🟢 10 | security policy file detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |

pip/einops 0.8.0 (score 🟢 4.5)

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | ⚠️ 1 | Found 4/24 approved changesets -- score normalized to 1 |
| Maintained | 🟢 6 | 5 commit(s) and 3 issue activity found in the last 90 days -- score normalized to 6 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Security-Policy | ⚠️ 0 | security policy file not detected |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |

pip/mkl 2024.2.1 (score Unknown, no details available)

pip/mkl-include 2024.2.1 (score Unknown, no details available)

pip/onnxruntime 1.19.0 (score 🟢 6.8)

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 10 | all last 30 commits are reviewed through GitHub |
| Maintained | 🟢 10 | 30 commit(s) out of 30 and 8 issue activity out of 30 found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no badge detected |
| Vulnerabilities | 🟢 10 | no vulnerabilities detected |
| Signed-Releases | ⚠️ 0 | 0 out of 5 artifacts are signed or have provenance |
| Branch-Protection | 🟢 8 | branch protection is not maximal on development and all release branches |
| Security-Policy | 🟢 10 | security policy file detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Packaging | ⚠️ -1 | no published package detected |
| License | 🟢 10 | license file detected |
| Token-Permissions | ⚠️ 0 | non read-only tokens detected in GitHub workflows |
| Dependency-Update-Tool | 🟢 10 | update tool detected |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |

pip/onnxruntime-extensions 0.12.0 (score 🟢 6.1)

| Check | Score | Reason |
| --- | --- | --- |
| Maintained | 🟢 10 | 30 commit(s) and 8 issue activity found in the last 90 days -- score normalized to 10 |
| Code-Review | 🟢 9 | Found 29/30 approved changesets -- score normalized to 9 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Security-Policy | 🟢 10 | security policy file detected |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| Binary-Artifacts | 🟢 7 | binaries present in source code |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |

pip/peft 0.12.0 (score Unknown, no details available)

pip/protobuf 5.28.0 (score 🟢 7)

| Check | Score | Reason |
| --- | --- | --- |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| CI-Tests | 🟢 10 | 20 out of 20 merged PRs checked by a CI test -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| Code-Review | ⚠️ 1 | found 25 unreviewed changesets out of 30 -- score normalized to 1 |
| Contributors | 🟢 10 | 13 different organizations found -- score normalized to 10 |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Dependency-Update-Tool | 🟢 10 | update tool detected |
| Fuzzing | 🟢 10 | project is fuzzed |
| License | 🟢 9 | license file detected |
| Maintained | 🟢 10 | 30 commit(s) out of 30 and 1 issue activity out of 30 found in the last 90 days -- score normalized to 10 |
| Packaging | ⚠️ -1 | no published package detected |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| SAST | 🟢 3 | SAST tool is not run on all commits -- score normalized to 3 |
| Security-Policy | 🟢 10 | security policy file detected |
| Signed-Releases | ⚠️ 0 | 0 out of 5 artifacts are signed or have provenance |
| Token-Permissions | 🟢 10 | GitHub workflow tokens follow principle of least privilege |
| Vulnerabilities | 🟢 7 | 3 existing vulnerabilities detected |

pip/psutil 6.0.0 (score 🟢 5.8)

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | ⚠️ 2 | Found 8/30 approved changesets -- score normalized to 2 |
| Maintained | 🟢 10 | 12 commit(s) and 11 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Security-Policy | 🟢 10 | security policy file detected |
| Packaging | ⚠️ -1 | packaging workflow not detected |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Branch-Protection | ⚠️ 0 | branch protection not enabled on development/release branches |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Fuzzing | 🟢 10 | project is fuzzed |
| Vulnerabilities | 🟢 10 | 0 existing vulnerabilities detected |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |

pip/tokenizers 0.20.0 (score 🟢 5.3)

| Check | Score | Reason |
| --- | --- | --- |
| Code-Review | 🟢 8 | Found 24/27 approved changesets -- score normalized to 8 |
| Maintained | 🟢 10 | 30 commit(s) and 23 issue activity found in the last 90 days -- score normalized to 10 |
| CII-Best-Practices | ⚠️ 0 | no effort to earn an OpenSSF best practices badge detected |
| License | 🟢 10 | license file detected |
| Signed-Releases | ⚠️ -1 | no releases found |
| Branch-Protection | ⚠️ -1 | internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration |
| Dangerous-Workflow | 🟢 10 | no dangerous workflow patterns detected |
| Binary-Artifacts | 🟢 10 | no binaries found in the repo |
| Token-Permissions | ⚠️ 0 | detected GitHub workflow tokens with excessive permissions |
| Security-Policy | ⚠️ 0 | security policy file not detected |
| Pinned-Dependencies | ⚠️ 0 | dependency not pinned by hash detected -- score normalized to 0 |
| Fuzzing | ⚠️ 0 | project is not fuzzed |
| Packaging | 🟢 10 | packaging workflow detected |
| SAST | ⚠️ 0 | SAST tool is not run on all commits -- score normalized to 0 |
| Vulnerabilities | 🟢 3 | 7 existing vulnerabilities detected |

Scanned Manifest Files

workflows/charts/huggingface-llm/requirements.txt
New versions:
  • accelerate@0.33.0
  • datasets@2.21.0
  • einops@0.8.0
  • mkl@2024.2.1
  • mkl-include@2024.2.1
  • onnxruntime@1.19.0
  • onnxruntime-extensions@0.12.0
  • peft@0.12.0
  • protobuf@5.28.0
  • psutil@6.0.0
  • tokenizers@0.20.0

Previous versions:
  • accelerate@0.30.1
  • datasets@2.19.0
  • einops@0.7.0
  • mkl@2023.2.0
  • mkl-include@2023.2.0
  • onnxruntime@1.17.3
  • onnxruntime-extensions@0.10.1
  • peft@0.11.1
  • protobuf@4.24.4
  • psutil@5.9.5
  • tokenizers@0.19.1


dependabot[bot] commented on behalf of GitHub · Sep 9, 2024

Superseded by #367.

dependabot[bot] closed this · Sep 9, 2024
dependabot[bot] deleted the dependabot/pip/workflows/charts/huggingface-llm/genai-workflow-4b25c72b46 branch · September 9, 2024 13:18