Suggestion on hyperparameter tuning on other datasets #5

Open
Barcavin opened this issue Jun 24, 2023 · 1 comment

@Barcavin

Hi,

I am trying to apply NCN/NCNC to other graphs. The README lists a lot of hyperparameters to tweak. Are there any general suggestions on where to start the hyperparameter tuning?

Thanks,

@Xi-yuanWang
Contributor

Hi,

We uploaded hyperparamopt.py in the refactor branch. It uses Optuna for hyperparameter tuning. You can also check the parseargs function in NeighborOverlap.py for the meaning of each hyperparameter.

We also used the following tricks:

  1. We first optimize the hyperparameters of NCN, then use them for NCNC with a few modifications.

  2. Reduce the search space during optimization. The choice of some parameters, like mplayers, model, jk, and res, converges very fast, so we fix these hyperparameters early. Moreover, you may find that a large dp and a small lr lead to poor performance, so you can narrow their search ranges.

  3. num_trial: We run the optimization until we get a satisfactory result. For ddi, we run about 900 trials; for ppa, about 60.

  4. runs: the number of repeated runs in each trial. More runs lead to a more stable score, so we use 1 at the beginning and 3 later. For ddi, we use 10, as the score is very unstable.
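Tricks 3 and 4 can be sketched as a simple staged search, illustrative only and not the authors' code: early trials use a single (noisy) run per configuration, later trials average several runs so the comparison score is more stable. `run_once` is a hypothetical stand-in for one full training run.

```python
import random
import statistics

def run_once(lr, dp, seed):
    # Stand-in for one full training run that returns a noisy validation
    # score; a private Random instance keeps the noise reproducible per seed.
    rng = random.Random(seed)
    return 1.0 - abs(lr - 1e-3) * 50 - dp * 0.1 + rng.gauss(0, 0.02)

def evaluate(lr, dp, runs):
    # Average `runs` repeated runs: more runs, lower variance in the score.
    return statistics.mean(run_once(lr, dp, s) for s in range(runs))

best = None
for trial in range(30):
    lr = 10 ** random.uniform(-4, -2)   # log-uniform learning rate
    dp = random.uniform(0.0, 0.8)       # dropout
    runs = 1 if trial < 20 else 3       # cheap early, stable later
    score = evaluate(lr, dp, runs)
    if best is None or score > best[0]:
        best = (score, lr, dp)
print(best)
```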

Feel free to ask if you run into any problems.
