Commit 38deac1

Authored Nov 5, 2024
Revert "Fix bug when loading 4bit checkpoint quantized in INC (huggingface#1447)"
This reverts commit 4bdf434.
1 parent 309e0c4 commit 38deac1

1 file changed: +0 −3 lines

examples/text-generation/utils.py
@@ -269,9 +269,6 @@ def setup_model(args, model_dtype, model_kwargs, logger):
             original_model=org_model,
             **model_kwargs,
         )
-        # TODO: This will be removed in v1.19 Synapse release
-        # the loaded model should have the same dtype as original_model
-        model = model.to(model_kwargs["torch_dtype"])
     else:
         if args.assistant_model is not None:
             assistant_model = AutoModelForCausalLM.from_pretrained(
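
For context, the removed lines re-cast the model returned by the INC load(...) call to the dtype recorded in model_kwargs, so it matched the original model's dtype. Below is a minimal, self-contained sketch of that dtype-alignment step; the model name and kwargs are illustrative, not taken from the commit.

import torch
from transformers import AutoModelForCausalLM

# Illustrative kwargs; in setup_model() these are built from the CLI args
model_kwargs = {"torch_dtype": torch.bfloat16}

# Stand-in for the INC load(...) call shown in the diff: any loader that may
# return weights in a dtype other than the requested one
model = AutoModelForCausalLM.from_pretrained("gpt2", **model_kwargs)

# The step removed by this revert: cast the loaded model back to the dtype
# recorded in model_kwargs so it matches the original model
model = model.to(model_kwargs["torch_dtype"])

print(next(model.parameters()).dtype)  # torch.bfloat16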
