LSUN dataset indoor scene layout labeling via transfer learning with segmentation ResNet101
- cv2 (OpenCV)
- PyTorch
- Torchvision
The root directory should be named "Bounding_box/"
- To label an image using a trained model:
$ python main.py "filename"
- To train a new model:
$ python learn.py "filename"
Alternatively, use cloud GPUs via "BBox.ipynb"; this requires uploading the LSUN data as well.
Trained models should be saved in the root directory.
Modify training parameters in "params.py".
Download the LSUN room dataset from the LSUN website and place it under the "data/" directory.
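To illustrate the kind of knobs "params.py" typically exposes, here is a hypothetical parameter module. Every name and value below is an assumption for illustration, not the repository's actual contents:

```python
# Hypothetical params.py-style configuration; the repository's actual
# parameter names and defaults may differ.
PARAMS = {
    "learning_rate": 1e-4,  # optimizer step size
    "batch_size": 8,        # images per gradient update
    "epochs": 30,           # passes over the LSUN training split
    "num_classes": 4,       # walls, ceiling lines, floor lines, wall lines
    "data_dir": "data/",    # where the LSUN room dataset is placed
    "model_dir": "./",      # trained models go in the root directory
}
```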
Credit: liamw96
Sample labeling:
Scenes are assigned four labels: walls (including ceiling and floor), ceiling lines, floor lines, and wall lines.
See the LICENSE.md file for details.
This project's implementation is largely based on:
- Mallya, A. & Lazebnik, S. (2015). Learning Informative Edge Maps for Indoor Scene Layout Prediction
- Schwing, A. G. & Urtasun, R. (2012). Efficient Exact Inference for 3D Indoor Scene Understanding
- Hedau, V., Hoiem, D. & Forsyth, D. A. (2009). Recovering the spatial layout of cluttered rooms
- Rother, C. (2000). A New Approach for Vanishing Point Detection in Architectural Environments
- Tardif, J.-P. (2009). Non-iterative approach for fast and accurate vanishing point detection
- Denis, P., Elder, J. H. & Estrada, F. J. (2008). Efficient Edge-Based Methods for Estimating Manhattan Frames in Urban Imagery
Contact me for a more detailed report on the implementation.