I have a question about Sen1Floods11 #1
Comments
Hi, yes: you save the best validation weights and evaluate on the test set, making sure that there is no over- or underfitting. Not sure about the code question; I think the code is already in the repo.
Thank you so much for your timely reply, I appreciate it. I want to ask how to make sure the best.pt selected on the validation set is not overfitting or underfitting. My current approach is: if the best IoU on the validation set has not improved for around sixty epochs, I save best.pt and evaluate it on the test set. Is my logic correct, or do I need to consider anything else to make sure the model is neither overfitting nor underfitting?
---
sunhanzhang 201918020233
On 2024-12-02 22:00, Ritu Yadav wrote:
Hi,
The IoU is only for the foreground (flood) class, otherwise it would be biased toward the background (no-flood) class. 0.67 IoU sounds about right.
Yes, you save the best validation weights and evaluate on the test set, making sure that there is no over- or underfitting.
Not sure about the code question; I think the code is already in the repo.
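For what it's worth, here is a minimal, framework-agnostic sketch of the patience-based scheme described in this question. The class name, the patience value, and the synthetic IoU curve are illustrative assumptions, not code from this repository: best.pt is overwritten only when the validation IoU improves, training stops once it has not improved for the chosen number of epochs, and the saved best.pt is then evaluated once on the test split.

```python
class BestCheckpointTracker:
    """Track the best validation score; signal a stop after `patience` epochs without improvement."""

    def __init__(self, patience=60, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_score = float("-inf")
        self.epochs_since_improvement = 0

    def step(self, val_iou, save_fn):
        """Call once per epoch; `save_fn` persists the current weights when val_iou improves."""
        if val_iou > self.best_score + self.min_delta:
            self.best_score = val_iou
            self.epochs_since_improvement = 0
            save_fn()  # in practice, e.g. torch.save(model.state_dict(), "best.pt")
        else:
            self.epochs_since_improvement += 1
        return self.epochs_since_improvement >= self.patience  # True -> stop training

# Toy usage with a synthetic validation-IoU curve standing in for a real training loop.
tracker = BestCheckpointTracker(patience=60)
for epoch, val_iou in enumerate([0.50, 0.55, 0.61] + [0.60] * 70):
    stop = tracker.step(val_iou, save_fn=lambda: print(f"epoch {epoch}: new best, saving best.pt"))
    if stop:
        print(f"no improvement for 60 epochs; best val IoU = {tracker.best_score:.2f}")
        break
```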
"Hello! Have you ever encountered a situation where the validation set
improves, but the test set drops on sen1floods11?"
---
sunhanzhang 201918020233
Hello, I have a question I would like to ask you. Recently, I've been working on the Sen1Floods11 dataset as well. After running my own improved U-Net model, I achieved an mIoU of around 0.8, and the IoU for the positive class is around 0.67. However, when I look at related papers, the IoU for the positive class in those papers is much lower. I’m wondering if I might have made a mistake somewhere in the process? When calculating the IoU, I use a global confusion matrix to calculate it all at once. Could there be any issues with this approach?
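For reference, here is a minimal NumPy sketch of the global-confusion-matrix approach described above; the helper names and the use of -1 for unlabeled Sen1Floods11 pixels are assumptions, not necessarily the exact code used. Per-chip confusion matrices are summed over the whole split and IoU is computed once at the end, so the flood-class IoU and the mIoU can both be read off the same matrix.

```python
import numpy as np

NUM_CLASSES = 2  # 0 = no flood (background), 1 = flood (foreground)

def update_confusion(conf, pred, target, ignore_index=-1):
    """Accumulate a global confusion matrix; rows = target class, columns = predicted class."""
    mask = target != ignore_index  # skip unlabeled pixels (e.g. the -1 labels in Sen1Floods11)
    t = target[mask].astype(np.int64)
    p = pred[mask].astype(np.int64)
    conf += np.bincount(t * NUM_CLASSES + p,
                        minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)
    return conf

def iou_from_confusion(conf):
    """Per-class IoU = TP / (TP + FP + FN), computed once from the global matrix."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)
    return iou  # iou[1] is the flood (foreground) IoU; iou.mean() is the mIoU

# Usage: accumulate over every chip in the split, then compute IoU once at the end.
conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
# for pred, target in predictions:          # hypothetical iterator over the split
#     conf = update_confusion(conf, pred, target)
# iou = iou_from_confusion(conf)
```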
Additionally, I’d like to ask: on what basis should we save the model weights for testing the performance on the test set? I save the model weights when the best IoU on the validation set occurs and then use that model for testing. Is this the correct approach?
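And a hedged sketch of the test-time protocol that this implies, assuming a hypothetical `model` and `test_loader` rather than this repository's actual code: the checkpoint is chosen purely on validation IoU, and the test split is evaluated exactly once with it.

```python
import numpy as np
import torch

def evaluate_test_set(model, test_loader, checkpoint_path="best.pt", ignore_index=-1):
    """Load the checkpoint selected on validation IoU and evaluate the test split once.

    Assumes the loader yields (image, label) batches with integer labels of shape
    (N, H, W), where `ignore_index` marks unlabeled pixels.
    """
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()

    conf = np.zeros((2, 2), dtype=np.int64)  # global confusion matrix over the test split
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images).argmax(dim=1)   # (N, H, W) predicted class indices
            valid = labels != ignore_index        # drop unlabeled pixels
            t = labels[valid].cpu().numpy().astype(np.int64)
            p = preds[valid].cpu().numpy().astype(np.int64)
            conf += np.bincount(t * 2 + p, minlength=4).reshape(2, 2)

    tp, fp, fn = conf[1, 1], conf[0, 1], conf[1, 0]
    return tp / max(tp + fp + fn, 1)              # foreground (flood) IoU, reported once
```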
Lastly, I would like to ask if you have reproduced the BASNet model used for the Sen1Floods11 dataset in 2021? If so, would it be possible for you to share the code?
I am a second-year graduate student and would sincerely appreciate your response. Thank you for your time.