TCD LoRA is higher rank than it needs to be #2

Open
AI-Casanova opened this issue Mar 1, 2024 · 2 comments

Comments

@AI-Casanova

Thanks for this amazing sampler, it's way better than LCM in my estimation.

The file size of the LoRA could be much smaller with no ill effect, however.

Full-rank file size: 375.6 MB

Resizing to rank 4: file size 23.8 MB
Average Frobenius norm retention: 91.92% | std: 0.101

Resizing to rank 2: file size 12.1 MB
Average Frobenius norm retention: 88.57% | std: 0.140


@mhh0318
Collaborator

mhh0318 commented Mar 2, 2024

Thank you for your valuable discovery. We will explore the influence of the model's rank further. In our preliminary experiments we found that rank has some impact on LCM, so we simply carried over the experimental setup from that time and did not run an ablation on rank. Thanks again for pointing it out.

@AI-Casanova
Author

No problem, and to be fair, sometimes training at a higher rank and then resizing via SVD gives better results than training at a lower rank directly.

Also, compared to LCM, I am greatly impressed by how much less reliant TCD seems to be on its LoRA: the LoRA fixes contrast, but the base image is already fully formed at 6 steps rather than a blurry mess. A quick way to reproduce that comparison is sketched below.

The rank observation was just something I discovered while the other devs at SDNext and I were adding your sampler.

Thanks again for a great sampler!
