
prediction format #3

Open
mjanddy opened this issue Apr 2, 2019 · 12 comments

Comments

@mjanddy

mjanddy commented Apr 2, 2019

What is the format of the prediction results?

@KevinZhang1201

You can find it on the WIDER FACE homepage by downloading the examples and format of submissions.
Link: http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/example/Submission_example.zip

First, you need to create a directory for all prediction results, like:

-- WFPred
   -- 0--Parade
      -- 0_Parade_marchingband_1_20.txt
      -- 0_Parade_marchingband_1_74.txt
      -- ...
   -- 1--Handshaking
      -- ...

In each file, the detection results should be written as:
image_name
the number of faces
x, y, w, h, confidence
...
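For example, a single prediction file such as 0_Parade_marchingband_1_20.txt might look like the following (the boxes and confidence values below are made up purely for illustration; check the official submission example for the exact separator and precision):

0_Parade_marchingband_1_20
3
112.0 45.5 38.0 52.0 0.998
310.2 60.0 40.2 55.5 0.973
521.5 72.8 35.0 48.0 0.431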

@monkeyMirandar

  1. My trained model is saved as a .pth file, and my test results are saved as a .pkl file. How do I produce the prediction format if I want to use this method for evaluation?

@KevinZhang1201

@monkeyMirandar Sorry, I'm not very sure what you mean. But I think you need to write your own script to run your models on the WIDER FACE dataset and generate the prediction files as mentioned above.

@monkeyMirandar

monkeyMirandar commented Jul 31, 2019 via email

@KevinZhang1201

Prediction dir
├── 0--Parade
│   ├── 0_Parade_marchingband_1_20.txt
│   ├── 0_Parade_marchingband_1_74.txt
│   └── ...
├── 1--Handshaking
│   ├── 1_Handshaking_Handshaking_1_35.txt
│   ├── 1_Handshaking_Handshaking_1_94.txt
│   └── ...
└── ...
The files should be organized like this.


@monkeyMirandar

monkeyMirandar commented Jul 31, 2019 via email

@KevinZhang1201

If you use Python to test your model, you can write it like this (see the sketch after the list below):

  1. Iterate over all images in the WIDER FACE dataset.
  2. Apply your model to each image, generate the prediction results, and write them into the corresponding file.
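A minimal sketch of such a script, assuming a hypothetical detect_faces(image_path) helper that returns a list of (x, y, w, h, score) tuples and the standard WIDER_val/images folder layout (the helper, paths, and file suffixes are illustrative, not part of this repository):

import os

# Hypothetical detector: returns a list of (x, y, w, h, score) tuples for one image.
from my_detector import detect_faces  # placeholder for your own model

WIDER_ROOT = "WIDER_val/images"  # dataset directory containing event folders such as 0--Parade
PRED_ROOT = "WFPred"             # output directory passed to the evaluation script via -p

for event in sorted(os.listdir(WIDER_ROOT)):
    event_dir = os.path.join(WIDER_ROOT, event)
    if not os.path.isdir(event_dir):
        continue
    out_dir = os.path.join(PRED_ROOT, event)
    os.makedirs(out_dir, exist_ok=True)

    for image_file in sorted(os.listdir(event_dir)):
        if not image_file.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        detections = detect_faces(os.path.join(event_dir, image_file))

        image_name = os.path.splitext(image_file)[0]
        with open(os.path.join(out_dir, image_name + ".txt"), "w") as f:
            # First line: image name, second line: number of faces,
            # then one "x y w h confidence" line per detection.
            f.write(image_name + "\n")
            f.write(str(len(detections)) + "\n")
            for x, y, w, h, score in detections:
                f.write(f"{x:.1f} {y:.1f} {w:.1f} {h:.1f} {score:.3f}\n")

Keeping the output folders named after the event folders (0--Parade, 1--Handshaking, ...) mirrors the directory tree shown above, so the evaluation script can find each prediction file.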

@foocker

foocker commented Dec 10, 2019

My trained model is ready! How can I generate this folder? Could you explain it in more detail? Sent from my iPhone


@foocker

foocker commented Dec 10, 2019


Do the *.mat files correspond to the test dataset? Why not the val dataset? So, to evaluate on the val dataset, should I rewrite the evaluation code?

I have tested on the val dataset and it works, but I don't plan to read evaluation.py. So what is the logic of evaluation.py? For example, the *.mat files contain easy, medium, and hard classes, but the val dataset does not have these classes, so how does it evaluate results predicted on the val dataset against the easy, medium, and hard classes in the *.mat files?

@Hiteshsaai

Can the prediction results be in text format, while the ground truth is in .mat format?

@wondervictor
Owner

prediction results are the same as the submission examples (txt files)

@NguyenKhacTuanAnh

prediction results are the same as the submission examples (txt files)

Why does it give this error? I have run python setup.py build_ext --inplace
and python3 evaluation.py -p results/ -g ground_truth/

Reading Predictions : 100%|█████████████████████████████████████████████████████| 61/61 [00:00<00:00, 603.65it/s]
Processing easy: 100%|█████████████████████████████████████████████████████████| 61/61 [00:00<00:00, 4502.23it/s]
Processing medium: 100%|███████████████████████████████████████████████████████| 61/61 [00:00<00:00, 4714.26it/s]
Processing hard: 100%|█████████████████████████████████████████████████████████| 61/61 [00:00<00:00, 4092.99it/s]
==================== Results ====================
Easy Val AP: 0.0
Medium Val AP: 0.0
Hard Val AP: 0.0
