Implementation of the regularization loss #109

Open
adosar opened this issue Apr 9, 2024 · 1 comment
adosar commented Apr 9, 2024

Hello, and congrats on this nice repo!

I was looking at the implementation of the regularization loss, and I am not sure it matches the original paper.

In the feature_transform_regularizer:

loss = torch.mean(torch.norm(torch.bmm(trans, trans.transpose(2,1)) - I, dim=(1,2)))

Why not:

loss = torch.mean(torch.norm(torch.bmm(trans, trans.transpose(2,1)) - I, dim=(1,2)).pow(2))

In the original paper they use the squared Frobenius norm:
$L_\text{reg} = ||I - AA^T||_F^2$
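For reference, a minimal sketch of the squared variant suggested above, assuming `trans` is a `(B, d, d)` batch of predicted feature-transform matrices as in the repo's `feature_transform_regularizer`:

```python
import torch

def feature_transform_regularizer(trans):
    """Squared-Frobenius-norm regularizer, matching L_reg = ||I - A A^T||_F^2.

    trans: (B, d, d) batch of predicted feature-transform matrices.
    """
    d = trans.size(1)
    I = torch.eye(d, device=trans.device).unsqueeze(0)   # (1, d, d), broadcasts over the batch
    diff = torch.bmm(trans, trans.transpose(2, 1)) - I   # A A^T - I (the sign does not change the norm)
    # Frobenius norm over the matrix dims, squared, then averaged over the batch
    return torch.mean(torch.norm(diff, dim=(1, 2)).pow(2))
```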

shan-zhong commented

I found that the regularization loss is only applied to trans_feat, and it is simply added to the total loss. Why is the regularization loss for trans ignored? I also think it might be better if the regularization loss were used to update only the parameters of the T-Net, rather than the whole network.
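A hedged sketch of what regularizing both returned transforms could look like in the training loop. The names `classifier`, `points`, `target`, the import path, and the 0.001 weight are assumptions for illustration, not verified against the repo:

```python
import torch.nn.functional as F

# Assumed import path; adjust to wherever the regularizer lives in the repo.
from pointnet.model import feature_transform_regularizer

# `classifier`, `points`, and `target` are placeholders for the model, input
# point cloud batch, and labels set up elsewhere in the training script.
pred, trans, trans_feat = classifier(points)
loss = F.nll_loss(pred, target)

# Regularize both the input transform and the feature transform,
# instead of only trans_feat.
if trans is not None:
    loss = loss + 0.001 * feature_transform_regularizer(trans)
if trans_feat is not None:
    loss = loss + 0.001 * feature_transform_regularizer(trans_feat)

loss.backward()
```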
