Hello,
thanks for your brilliant work.
I tried to implement your idea in PyTorch since I'm not familiar with TensorFlow.
These two functions are my implementation of Laplacian smoothing and Laplacian sharpening, where A is the adjacency matrix:
```python
import torch


def laplacian_smooth(A):
    """
    Args:
        A: batches of adjacency matrices of symmetric interactions,
           size [batch_size, num_edgeTypes, num_nodes, num_nodes]
    Returns:
        A_norm = (D**-0.5) (A + I) (D**-0.5), where D is the degree matrix of A + I
    """
    I = torch.eye(A.size(-1), device=A.device)
    I = I.unsqueeze(0).unsqueeze(1)
    I = I.expand(A.size(0), A.size(1), I.size(2), I.size(3))
    # size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    A_p = A + I
    D_values = A_p.sum(-1)               # degrees of A + I; size: [batch_size, num_edgeTypes, num_nodes]
    D_values_p = torch.pow(D_values, -0.5)
    D_p = torch.diag_embed(D_values_p)   # size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    return torch.matmul(D_p, torch.matmul(A_p, D_p))


def laplacian_sharpen(A):
    """
    Args:
        A: batches of adjacency matrices corresponding to edge types,
           size [batch_size, num_edgeTypes, num_nodes, num_nodes]
    Returns:
        A_sharp = (D**-0.5) (2I - A) (D**-0.5), where D is the degree matrix of A + 2I
    """
    I = torch.eye(A.size(-1), device=A.device)
    I = I.unsqueeze(0).unsqueeze(1)
    I = I.expand(A.size(0), A.size(1), I.size(2), I.size(3))
    # size: [batch_size, num_edgeTypes, num_nodes, num_nodes]
    A_p = 2 * I - A
    D_values = A.sum(-1) + 2             # degrees of A + 2I; size: [batch_size, num_edgeTypes, num_nodes]
    D_values_p = torch.pow(D_values, -0.5)
    D_p = torch.diag_embed(D_values_p)
    return torch.matmul(D_p, torch.matmul(A_p, D_p))
```
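For reference, this is the kind of minimal sanity check I run on the two functions; the 3-node toy graph below is purely illustrative and not taken from your paper:

```python
# Toy 3-node path graph, one batch, one edge type (illustrative only).
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]]).view(1, 1, 3, 3)  # [batch, edge_type, N, N]

A_smooth = laplacian_smooth(A)
A_sharp = laplacian_sharpen(A)
print(A_smooth.shape, A_sharp.shape)  # both torch.Size([1, 1, 3, 3])
```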
Are there any problems with my implementation?
I tested the model on the Cora dataset.
I use the Frobenius norm as the reconstruction error, i.e.
`torch.norm(X - X_rec, p="fro")`,
to train a 2-layer encoder and a 2-layer decoder to minimise the reconstruction error. After training, I run K-Means (n_clusters=7) on the latent representations and found that the NMI of the clustering results is very low compared with the results in your paper. Could you tell me more details of your model, such as which activation function is used in the output layer of the encoder, and what the dimensions of the hidden layers and the output layer of the encoder are?
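For context, here is a minimal sketch of my training and evaluation loop. The layer widths, learning rate, number of epochs, and the use of scikit-learn's KMeans/NMI are my own placeholder choices, not details from your paper (which is exactly what I'm asking about); `X`, `A`, and `labels` are assumed to be loaded from Cora beforehand.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Assumed inputs: X [1, 1, N, 1433] node features, A [1, 1, N, N] adjacency,
# labels: ground-truth Cora class ids (loaded elsewhere).
A_smooth = laplacian_smooth(A)   # used by the 2-layer encoder
A_sharp = laplacian_sharpen(A)   # used by the 2-layer decoder

enc1, enc2 = nn.Linear(1433, 500), nn.Linear(500, 100)   # widths are my guesses
dec1, dec2 = nn.Linear(100, 500), nn.Linear(500, 1433)
params = [*enc1.parameters(), *enc2.parameters(), *dec1.parameters(), *dec2.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    H = torch.relu(A_smooth @ enc1(X))        # Laplacian smoothing in the encoder
    Z = A_smooth @ enc2(H)                    # latent codes (output activation unclear to me)
    H_rec = torch.relu(A_sharp @ dec1(Z))     # Laplacian sharpening in the decoder
    X_rec = A_sharp @ dec2(H_rec)
    loss = torch.norm(X - X_rec, p="fro")     # Frobenius-norm reconstruction error
    loss.backward()
    optimizer.step()

# Cluster the latent codes and evaluate with NMI against the ground-truth classes.
Z_np = Z.detach().squeeze().numpy()
pred = KMeans(n_clusters=7).fit_predict(Z_np)
print(normalized_mutual_info_score(labels, pred))
```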