This repository has been archived by the owner on Jun 9, 2023. It is now read-only.
Hi, there!
Thanks for your great work, it helps a lot and the application of siamese networks for one-shot learning also deserves further research.
However, I find that you implement the loss function as `binary_crossentropy`, in the following way:

Siamese-Networks-for-One-Shot-Learning/siamese_network.py, line 144 at commit 8aae456
I suppose this is the expected behavior, but the paper (https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) says the former part of the proposed loss function is:

$$\mathbf{y}(x_1^{(i)}, x_2^{(i)}) \log \mathbf{p}(x_1^{(i)}, x_2^{(i)}) + (1 - \mathbf{y}(x_1^{(i)}, x_2^{(i)})) \log \bigl(1 - \mathbf{p}(x_1^{(i)}, x_2^{(i)})\bigr)$$
which seems to be the negative of `binary_crossentropy`, and that confuses me a lot. Does the author actually mean the negated form,

$$-\Bigl[\mathbf{y}\log \mathbf{p} + (1 - \mathbf{y})\log(1 - \mathbf{p})\Bigr]$$

Or could you please provide some more explanation?
Thanks & best regards