This repository hosts the contributor source files for the googlenet model. ModelHub integrates these files into an engine and a controlled runtime environment. A unified API allows for out-of-the-box reproducible implementations of published models. For more information, please visit www.modelhub.ai or contact us at info@modelhub.ai.
## meta

| | |
|-|-|
| id | 948e93d7-bc36-4c39-9640-dc3345269fe7 |
| application_area | ImageNet |
| task | Classification |
| task_extended | ImageNet classification |
| data_type | Image/Photo |
| data_source | http://www.image-net.org/challenges/LSVRC/2014/ |
## publication

| | |
|-|-|
| title | Going Deeper With Convolutions |
| source | Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition |
| url | http://openaccess.thecvf.com/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf |
| year | 2015 |
| authors | Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich |
| abstract | We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. |
| google_scholar | https://scholar.google.com/scholar?oi=bibs&hl=en&cites=17799971764477278135&as_sdt=5 |
| bibtex | @INPROCEEDINGS{7298594, author={C. Szegedy and Wei Liu and Yangqing Jia and P. Sermanet and S. Reed and D. Anguelov and D. Erhan and V. Vanhoucke and A. Rabinovich}, booktitle={2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, title={Going deeper with convolutions}, year={2015}, pages={1-9}, doi={10.1109/CVPR.2015.7298594}, ISSN={1063-6919}, month={June}} |
## model

| | |
|-|-|
| description | 22 layers deep network. The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. |
| provenance | https://github.com/onnx/models/tree/master/bvlc_googlenet |
| architecture | Convolutional Neural Network (CNN) |
| learning_type | Supervised learning |
| format | .onnx |
| I/O | model I/O can be viewed here |
| license | model license can be viewed here |
To run this model and view others in the collection, follow the instructions on ModelHub.
To contribute models, see the ModelHub docs.
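Outside the ModelHub runtime, the `.onnx` file can also be exercised directly. The sketch below shows one way to preprocess an image and run the exported model with `onnxruntime`. The preprocessing constants (224x224 input, BGR channel order, mean subtraction with [104, 117, 123], NCHW float32 layout) are assumptions typical of the Caffe-derived bvlc_googlenet export, and the model file name is hypothetical; consult the model I/O specification above for the authoritative contract.

```python
import numpy as np


def preprocess(image_rgb: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 RGB image into a (1, 3, 224, 224) float32 batch.

    Assumed convention (typical for Caffe-derived GoogLeNet exports):
    BGR channel order, per-channel mean subtraction, NCHW layout.
    """
    h, w, _ = image_rgb.shape
    # Naive nearest-neighbour resize to 224x224 (use PIL/OpenCV in practice).
    rows = np.arange(224) * h // 224
    cols = np.arange(224) * w // 224
    resized = image_rgb[rows][:, cols].astype(np.float32)
    bgr = resized[:, :, ::-1].copy()           # RGB -> BGR
    bgr -= np.array([104.0, 117.0, 123.0])     # assumed ImageNet channel means
    chw = bgr.transpose(2, 0, 1)               # HWC -> CHW
    return chw[np.newaxis, ...]                # add batch dimension


# Inference sketch (requires onnxruntime and the model file locally):
# import onnxruntime as ort
# sess = ort.InferenceSession("bvlc_googlenet.onnx")   # hypothetical path
# input_name = sess.get_inputs()[0].name
# logits = sess.run(None, {input_name: preprocess(img)})[0]
# top5 = np.argsort(logits[0])[::-1][:5]               # top-5 class indices
```

The preprocessing is kept dependency-free on purpose; in a real pipeline a proper image library should handle the resize and any aspect-ratio cropping.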