Falcon-40B converts successfully but output_dir has not been created #355
Comments
@JunxiChhen I do not see in your logs that the model converted successfully (only the beginning of the logs is shown here); is it possible that this directory does not have write permissions?
We do have write permission. We can successfully convert other models in the same directory, including llama2, gpt-j, and others. Could you try it and check whether you hit the same issue? Thanks. @eaidova
@JunxiChhen Unfortunately, my working machine does not have enough RAM to convert falcon-40b, so I cannot check it myself. If you can share more conversion logs (preferably as text rather than a screenshot), that would be very helpful.
benchmark_latency__bfloat16___04-23-24-06-25-42.log
@JunxiChhen possibly it is a logging issue, but I cannot reproduce the same behaviour on my side. On my end, model conversion failed; I prepared a fix for that: huggingface/optimum-intel#685
@JunxiChhen could you please check falcon with the latest openvino.genai? The fix in optimum-intel has been merged, and the pinned commit in the llm_bench requirements.txt has been updated.
@JunxiChhen maybe setting these variables can help avoid reaching the HF Hub: https://huggingface.co/docs/transformers/main/en/installation#offline-mode
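The linked documentation describes offline mode via environment variables. As a minimal sketch, the variables can be set in the shell before running the conversion, or from Python before importing transformers (the variable names come from the Hugging Face docs; setting them programmatically like this is just one option):

```python
import os

# Per the Hugging Face offline-mode docs, these must be set before
# transformers / huggingface_hub are imported, so that no network
# requests are made and only locally cached files are used.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

print(os.environ["TRANSFORMERS_OFFLINE"])  # prints "1"
```

The shell equivalent would be exporting the same variables before invoking the conversion script.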
We downgraded the transformers version to 4.39.1 and it passed. Thanks @eaidova
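The workaround above amounts to pinning transformers to 4.39.1 (e.g. `pip install "transformers==4.39.1"`). As a hypothetical sketch, a startup check could warn when a newer, untested version is installed; the version 4.39.1 comes from this thread, while the helper and constant names below are assumptions:

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '4.39.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# The last transformers release reported working for falcon-40b
# conversion in this thread.
KNOWN_GOOD = parse_version("4.39.1")

# At runtime one would compare transformers.__version__ against
# KNOWN_GOOD, e.g.:
#   if parse_version(transformers.__version__) > KNOWN_GOOD: warn(...)
print(parse_version("4.40.0") > KNOWN_GOOD)  # prints True
```

Note that this simple tuple comparison does not handle pre-release suffixes like `4.40.0.dev0`; for real use, a full PEP 440 parser (e.g. `packaging.version.Version`) would be safer.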
Convert cmd:
Benchmarking cmd: