Thank you for this valuable discovery. We will explore the influence of rank further. In our preliminary exploratory experiments we found that rank has some impact on LCM, so we carelessly carried over the experimental setup from that time and did not perform an ablation on rank. Thanks again for pointing it out.
No problem. And to be fair, sometimes training at a higher rank and then reducing with SVD is better than training at a lower rank in the first place.
Also, compared to LCM, I'm greatly impressed by how much less TCD seems to rely on the LoRA. The LoRA fixes contrast, but the base image is already fully formed at 6 steps rather than a blurry mess.
This was just something I discovered while the other devs at SDNext and I were adding your sampler.
Thanks for this amazing sampler; in my estimation it's way better than LCM.
However, the LoRA file could be much smaller with no ill effect:
- Full rank: 375.6 MB
- Resized to rank 4: 23.8 MB, average Frobenius norm retention 91.92% (std 0.101)
- Resized to rank 2: 12.1 MB, average Frobenius norm retention 88.57% (std 0.140)
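For reference, here is a minimal sketch of how such a resize can work: reconstruct each layer's effective ΔW = up @ down, take a truncated SVD, split the factors back into a smaller up/down pair, and measure Frobenius norm retention as ‖ΔW_r‖_F / ‖ΔW‖_F. The safetensors key names (`lora_down.weight` / `lora_up.weight`), the file paths, and the linear-layers-only scope are illustrative assumptions, not the exact script used for the numbers above.

```python
import torch
from safetensors.torch import load_file, save_file

def resize_lora(path_in: str, path_out: str, new_rank: int) -> None:
    state = load_file(path_in)
    out, retentions = dict(state), []
    for key in state:
        # Only handle 2-D (linear) layers in this sketch; conv kernels and
        # per-layer alpha scaling are left to a full-featured resize script.
        if not key.endswith("lora_down.weight") or state[key].dim() != 2:
            continue
        up_key = key.replace("lora_down", "lora_up")
        down, up = state[key].float(), state[up_key].float()
        full = up @ down  # effective delta-W of this layer
        U, S, Vh = torch.linalg.svd(full, full_matrices=False)
        r = min(new_rank, S.numel())
        # Split sqrt(S) between the factors so up/down stay balanced in scale.
        new_up = U[:, :r] * S[:r].sqrt()
        new_down = S[:r].sqrt().unsqueeze(1) * Vh[:r, :]
        # Frobenius norm retention: ||delta-W_r||_F / ||delta-W||_F
        retentions.append(((new_up @ new_down).norm() / full.norm()).item())
        out[up_key] = new_up.to(state[up_key].dtype).contiguous()
        out[key] = new_down.to(state[key].dtype).contiguous()
    save_file(out, path_out)
    ret = torch.tensor(retentions)
    print(f"Average Frobenius norm retention: {ret.mean().item():.2%} "
          f"| std: {ret.std().item():.3f}")

# Hypothetical file names, for illustration only.
resize_lora("tcd_lora.safetensors", "tcd_lora_rank4.safetensors", new_rank=4)
```

Splitting `sqrt(S)` evenly between the two factors keeps the resized matrices at comparable magnitudes; a production resize script would also handle conv weights and adjust any stored alpha values for the new rank.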