
[GHA] Use TinyLlama-1.1B-Chat-v1.0 instead of LaMini-GPT-124M #7673

Re-run triggered: March 3, 2025 10:40
Status: Cancelled
Total duration: 36m 37s
Artifacts

lcm_dreamshaper_cpp.yml

on: pull_request

lcm_dreamshaper_v7_cpp-linux: 24m 37s
lcm_dreamshaper_v7_cpp-windows: 20m 25s
ci/gha_overall_status_lcm: 0s

Annotations

3 errors and 1 warning
Errors:
- lcm_dreamshaper_v7_cpp-windows: Canceling since a higher priority waiting request for 'refs/pull/1816/merge-lcm-dreamshaper-cpp' exists
- lcm_dreamshaper_v7_cpp-windows: The operation was canceled.
- ci/gha_overall_status_lcm: Process completed with exit code 1.

Warning:
- lcm_dreamshaper_v7_cpp-windows: Cache not found for keys: setup-python-Windows-x64-python-3.9.13-pip-91b55077a3f6a7031e0fcb22c8cf06a1dfebd9449ee34265a32fc99d2c3f4b5c, setup-python-Windows-x64-python-3.9.13-pip
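The cancellation messages above are produced by GitHub Actions' concurrency feature, which cancels an in-flight run when a newer run is queued for the same group. A minimal sketch of what such a setting could look like in lcm_dreamshaper_cpp.yml follows; the exact group expression is an assumption inferred from the group name 'refs/pull/1816/merge-lcm-dreamshaper-cpp' seen in the log, since for pull_request events `github.ref` resolves to `refs/pull/<number>/merge`:

```yaml
# Hypothetical concurrency block for lcm_dreamshaper_cpp.yml.
# The group name combines the PR merge ref with a workflow-specific
# suffix, matching the group shown in the cancellation annotation.
concurrency:
  group: ${{ github.ref }}-lcm-dreamshaper-cpp
  cancel-in-progress: true
```

With `cancel-in-progress: true`, pushing a new commit to the pull request cancels the still-running jobs for the previous commit, which also explains the downstream "Process completed with exit code 1" on the overall-status check.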