
can't reproduce the result #5

Open
fatcatZF opened this issue Jan 4, 2022 · 0 comments

Comments


fatcatZF commented Jan 4, 2022

Hello,
thanks for your brilliant work.
I tried to implement your idea in PyTorch, since I'm not familiar with TensorFlow.
These two functions are my implementations of Laplacian smoothing and Laplacian sharpening, where A is the adjacency matrix:
import torch

def laplacian_smooth(A):
    """
    args:
        A: batches of adjacency matrices of symmetric interactions
           size of A: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    return:
        A_norm = (D**-0.5)(A+I)(D**-0.5), where D is the degree matrix of A+I
    """
    I = torch.eye(A.size(-1))
    I = I.unsqueeze(0).unsqueeze(1)
    I = I.expand(A.size(0), A.size(1), I.size(2), I.size(3))
    # size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    A_p = A + I  # add self-loops
    D_values = A_p.sum(-1)  # degree values; size: [batch_size, num_edgeTypes, num_nodes]
    D_values_p = torch.pow(D_values, -0.5)
    D_p = torch.diag_embed(D_values_p)  # size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    return torch.matmul(D_p, torch.matmul(A_p, D_p))

def laplacian_sharpen(A):
    """
    args:
        A: batches of adjacency matrices corresponding to edge types
           size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    return:
        (D**-0.5)(2I-A)(D**-0.5), where D is built from the degree values of A plus 2
    """
    I = torch.eye(A.size(-1))
    I = I.unsqueeze(0).unsqueeze(1)
    I = I.expand(A.size(0), A.size(1), I.size(2), I.size(3))
    # size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    Ap = 2 * I - A
    D_values = A.sum(-1) + 2  # size: [batch_size, num_edgeTypes, num_nodes]
    D_values_p = torch.pow(D_values, -0.5)
    D_p = torch.diag_embed(D_values_p)
    return torch.matmul(D_p, torch.matmul(Ap, D_p))
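For reference, this is the quick shape check I run on random inputs (the tensor sizes below are arbitrary test values I chose, not values from your setup):

# Quick shape check on a random symmetric 0/1 adjacency batch
# (batch_size=4, num_edgeTypes=2, num_nodes=5 are arbitrary test values)
A = torch.rand(4, 2, 5, 5)
A = ((A + A.transpose(-1, -2)) > 1.0).float()  # symmetrise and binarise
print(laplacian_smooth(A).shape)   # torch.Size([4, 2, 5, 5])
print(laplacian_sharpen(A).shape)  # torch.Size([4, 2, 5, 5])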

Are there any problems with my implementation?

I tested the model on the Cora dataset.
I used the Frobenius norm as the reconstruction error, i.e. torch.norm(X - X_rec, p="fro"), and trained a 2-layer encoder and 2-layer decoder to minimise it.
After training, I used K-Means (n_clusters=7) to cluster the latent representations and found that the NMI of the clustering result is very low compared with the results in your paper. Could you tell me more details of your model, such as which activation function is used in the output layer of the encoder, and what the dimensions of the hidden layers and the output layer of the encoder are?
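In case the problem is in the pipeline rather than in the operators, this is roughly what my training and evaluation code looks like. The hidden/latent dimensions, optimiser settings, and number of epochs below are arbitrary guesses on my side, not values taken from your paper, and I drop the batch/edge-type dimensions for simplicity:

import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# X: [num_nodes, num_features] node features, A: [num_nodes, num_nodes] adjacency,
# y: [num_nodes] ground-truth Cora labels (all loaded elsewhere).
def train_and_evaluate(X, A, y, hidden_dim=256, latent_dim=32, epochs=200):
    # reuse the batched operators by adding dummy batch/edge-type dimensions
    A_smooth = laplacian_smooth(A.unsqueeze(0).unsqueeze(0)).squeeze(0).squeeze(0)
    A_sharp = laplacian_sharpen(A.unsqueeze(0).unsqueeze(0)).squeeze(0).squeeze(0)

    W_enc1 = nn.Linear(X.size(1), hidden_dim, bias=False)
    W_enc2 = nn.Linear(hidden_dim, latent_dim, bias=False)
    W_dec1 = nn.Linear(latent_dim, hidden_dim, bias=False)
    W_dec2 = nn.Linear(hidden_dim, X.size(1), bias=False)
    params = (list(W_enc1.parameters()) + list(W_enc2.parameters())
              + list(W_dec1.parameters()) + list(W_dec2.parameters()))
    optimizer = torch.optim.Adam(params, lr=1e-3)

    for epoch in range(epochs):
        optimizer.zero_grad()
        # 2-layer encoder: Laplacian smoothing in every layer
        H = torch.relu(A_smooth @ W_enc1(X))
        Z = A_smooth @ W_enc2(H)                # latent representation
        # 2-layer decoder: Laplacian sharpening in every layer
        H_dec = torch.relu(A_sharp @ W_dec1(Z))
        X_rec = A_sharp @ W_dec2(H_dec)
        loss = torch.norm(X - X_rec, p="fro")   # Frobenius reconstruction error
        loss.backward()
        optimizer.step()

    # cluster the latents and compare against the ground-truth labels
    with torch.no_grad():
        H = torch.relu(A_smooth @ W_enc1(X))
        Z = (A_smooth @ W_enc2(H)).cpu().numpy()
    pred = KMeans(n_clusters=7, n_init=10).fit_predict(Z)
    return normalized_mutual_info_score(y.cpu().numpy(), pred)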
