
Segmentation fault (core dumped) #2

Open
YLONl opened this issue Jan 15, 2020 · 7 comments

Comments

YLONl commented Jan 15, 2020

Has anyone else gotten this error? I googled it and found some possible causes related to the core dump and the GPU version, but I don't know how to solve it.

Author

YLONl commented Jan 15, 2020

I know which line triggers it:
`import networks`

Owner

Serge-weihao commented Jan 15, 2020

What does the traceback show?

Author

YLONl commented Jan 15, 2020

No traceback is shown, just that message.


kumartr commented May 26, 2020

I would like to know more about the energy_H and energy_W variables: what they compute and how they help achieve the result.
Also, how is the Criss-Cross Attention itself achieved?
```python
energy_H = (torch.bmm(proj_query_H, proj_key_H) + self.INF(m_batchsize, height, width)).view(m_batchsize, width, height, height).permute(0, 2, 1, 3)
energy_W = torch.bmm(proj_query_W, proj_key_W).view(m_batchsize, height, width, width)
concate = self.softmax(torch.cat([energy_H, energy_W], 3))
```

Owner

They aggregate the values from the same column (energy_H) and the same row (energy_W) as the query. self.INF masks one of the two overlapping positions so that its attention weight becomes zero.
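A minimal plain-Python sketch of this masking step (toy sizes and toy scores, no PyTorch): for one query pixel, the column contributes H scores and the row W scores, the duplicate position is masked with −∞, and after the softmax only H + W − 1 weights are non-zero.

```python
import math

def softmax(xs):
    # Numerically stable softmax; math.exp(-inf) underflows cleanly to 0.0.
    m = max(x for x in xs if x != float("-inf"))
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

H, W = 3, 4   # toy feature-map size (height x width)
i, j = 1, 2   # position of one query pixel

# For the query at (i, j), energy_H holds H scores from its column and
# energy_W holds W scores from its row; pixel (i, j) appears in both sets.
energy_H = [0.5] * H   # toy column scores
energy_W = [0.5] * W   # toy row scores

# self.INF masks the duplicate (i, j) entry in the column scores with -inf,
# so its softmax weight is exactly zero.
energy_H[i] = float("-inf")

# torch.cat([energy_H, energy_W], 3) followed by softmax, in miniature:
weights = softmax(energy_H + energy_W)
nonzero = sum(1 for w in weights if w > 0)
print(nonzero)   # 6, i.e. H + W - 1
```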


kumartr commented May 27, 2020

Thanks a lot, Serge, for your kind reply.
I will take another look at your code with this insight.
One more question: where are the H+W-1 'channels' of the attention maps computed?

Would it be possible to connect sometime over a short Zoom call to clarify a few other points?
My email address is kumartr@gmail.com
My LinkedIn profile is below:
https://www.linkedin.com/in/kumartr/

Owner

`concate = self.softmax(torch.cat([energy_H, energy_W], 3))` computes the attention maps. One of the two overlapping positions was masked by self.INF, so its weight after the softmax is zero: each map has H+W-1 non-zero values plus 1 zero value.
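To make those H+W-1 'channels' concrete, here is a shape walkthrough of the snippet in plain Python (a sketch: the sizes m, C, H, W are toy values, and the shapes follow the usual layout of this kind of criss-cross attention code):

```python
m, C, H, W = 2, 8, 5, 7   # toy batch size, reduced channels, height, width

# energy_H: bmm of (m*W, H, C) with (m*W, C, H) -> (m*W, H, H),
# then .view(m, W, H, H).permute(0, 2, 1, 3) -> (m, H, W, H):
# for every pixel, one score per pixel in its column.
energy_H_shape = (m, H, W, H)

# energy_W: bmm of (m*H, W, C) with (m*H, C, W) -> (m*H, W, W),
# then .view(m, H, W, W): one score per pixel in its row.
energy_W_shape = (m, H, W, W)

# torch.cat([energy_H, energy_W], 3) -> (m, H, W, H + W): these H + W
# entries along the last dim are the attention-map "channels"; after
# self.INF masks the duplicate position, H + W - 1 carry non-zero weight.
concate_shape = (m, H, W, H + W)
print(concate_shape)   # (2, 5, 7, 12)
```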
