EyeNet++ is a novel semantic segmentation network designed for outdoor 3D point cloud datasets. Inspired by the human visual field, EyeNet++ introduces a multi-scale, multi-density input scheme combined with a parallel processing architecture. Together, these components widen spatial coverage and improve feature extraction on dense point cloud data.
- Multi-Scale and Multi-Density Input Scheme: Captures diverse spatial features by pairing a dense central region with sparse peripheral regions (see the sampling sketch after this list).
- Dense Local Feature Aggregation (DLFA): Efficiently extracts contextual information from objects of varying sizes (a generic aggregation sketch follows below).
- Connection and Feature Merging Blocks: Facilitate effective exchange and integration of features across the multi-scale inputs (a fusion sketch follows below).
- Proven Performance: Demonstrated state-of-the-art results on benchmark datasets such as SensatUrban, Toronto3D, and YUTO Semantic.
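To make the input scheme concrete, here is a minimal sampling sketch: it crops a dense point set from a central region and a sparse set from the surrounding periphery, mimicking a foveal/peripheral split. All names and parameters (`eye_sample`, `r_inner`, `n_dense`, etc.) are illustrative assumptions, not the repository's actual preprocessing.

```python
import numpy as np

def eye_sample(points, center, r_inner=5.0, r_outer=20.0,
               n_dense=4096, n_sparse=1024, rng=None):
    """Toy multi-density sampler: dense points near the center,
    sparse points in the peripheral annulus. Parameters are
    illustrative; assumes both regions contain at least one point."""
    rng = rng if rng is not None else np.random.default_rng()
    dist = np.linalg.norm(points[:, :3] - center[:3], axis=1)
    inner = np.flatnonzero(dist <= r_inner)
    outer = np.flatnonzero((dist > r_inner) & (dist <= r_outer))
    # Resample with replacement if a region holds too few points.
    dense_idx = rng.choice(inner, n_dense, replace=len(inner) < n_dense)
    sparse_idx = rng.choice(outer, n_sparse, replace=len(outer) < n_sparse)
    # The two point sets would feed the network's parallel branches.
    return points[dense_idx], points[sparse_idx]
```

A crop like this gives the network a wide receptive field at modest cost: fine detail where it matters most and coarse context elsewhere.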
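The exact design of the DLFA block is detailed in the paper; as a rough stand-in, the sketch below shows a generic local feature aggregation layer in TensorFlow (neighbor gathering, relative position encoding, a shared MLP, and max pooling). The class name, shapes, and layout are assumptions and may differ from the real module.

```python
import tensorflow as tf

class LocalAggregation(tf.keras.layers.Layer):
    """Generic local aggregation sketch; the paper's DLFA block
    may encode and pool neighborhoods differently."""
    def __init__(self, units=64, **kwargs):
        super().__init__(**kwargs)
        self.mlp = tf.keras.layers.Dense(units, activation="relu")

    def call(self, xyz, feats, neighbor_idx):
        # xyz: (B, N, 3), feats: (B, N, C), neighbor_idx: (B, N, K)
        nbr_xyz = tf.gather(xyz, neighbor_idx, batch_dims=1)     # (B, N, K, 3)
        nbr_feat = tf.gather(feats, neighbor_idx, batch_dims=1)  # (B, N, K, C)
        rel = nbr_xyz - xyz[:, :, tf.newaxis, :]   # neighbor offsets from center
        enc = self.mlp(tf.concat([rel, nbr_feat], axis=-1))
        return tf.reduce_max(enc, axis=2)          # pool over the K neighbors
```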
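For the merging side, one common way to exchange features between branches operating at different densities is to propagate each sparse-branch feature to the nearest dense-branch point and fuse the result with a shared MLP. The sketch below shows that generic pattern under assumed shapes; it is not necessarily how the paper's connection and feature merging blocks work.

```python
import tensorflow as tf

class FeatureMerge(tf.keras.layers.Layer):
    """Generic cross-branch fusion sketch: nearest-neighbor upsampling
    of sparse-branch features followed by a shared MLP. The paper's
    connection/merging blocks may differ."""
    def __init__(self, units=64, **kwargs):
        super().__init__(**kwargs)
        self.mlp = tf.keras.layers.Dense(units, activation="relu")

    def call(self, dense_feats, sparse_feats, nn_idx):
        # dense_feats: (B, Nd, C1); sparse_feats: (B, Ns, C2)
        # nn_idx: (B, Nd) index of each dense point's nearest sparse point
        upsampled = tf.gather(sparse_feats, nn_idx, batch_dims=1)  # (B, Nd, C2)
        return self.mlp(tf.concat([dense_feats, upsampled], axis=-1))
```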
The source code for EyeNet++ and pre-trained models will be released soon. Stay tuned for updates!
EyeNet++ has been extensively evaluated on three large-scale datasets: SensatUrban, Toronto3D, and YUTO Semantic. Detailed quantitative results are reported in the paper.
The GitHub repository will include:
- Implementation code for EyeNet++ in TensorFlow.
- Pre-trained models for easy reproducibility.
- Training and evaluation scripts.
- Documentation to guide users through setup and usage.
If you find this work helpful, please cite our paper. The paper is currently under review; citation details will be added here once it is published.