How did you get 96×96 high-resolution REAL images for comparison?
My understanding was that we take the original dataset (CIFAR-10, which consists of 32×32 images), down-sample each image (say by a factor of 4, to 8×8), and use the down-sampled image together with its original as a training pair.
At test time we would then feed a down-sampled (8×8) image and expect a 32×32 output that is very close to the corresponding original 32×32 image. How come your outputs are 96×96? It seems you first up-scaled the images and then down-sampled them. Won't that affect the quality of the output? For reference, a sketch of how I assumed the training pairs are built is below.
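A minimal sketch of the pairing I had in mind, for clarity. The function name `make_lr_hr_pair` and the bicubic kernel are my own assumptions for illustration, not necessarily what this repo actually does:

```python
import numpy as np
from PIL import Image

def make_lr_hr_pair(hr_array, scale=4):
    """Given a 32x32 HR image as an (H, W, C) uint8 array, return (lr, hr)
    where lr is the bicubic-downsampled 8x8 version used as network input."""
    hr = Image.fromarray(hr_array)
    w, h = hr.size                                            # 32 x 32 for CIFAR-10
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)   # 8 x 8
    return np.asarray(lr), np.asarray(hr)

# Dummy 32x32 RGB array standing in for one CIFAR-10 sample.
dummy_hr = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
lr, hr = make_lr_hr_pair(dummy_hr, scale=4)
print(lr.shape, hr.shape)                                     # (8, 8, 3) (32, 32, 3)
```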