Boulder Activity
Brief overview presentation today, PPTX: Antarctic Feature Classification
- Due to licensing issues that weren't initially clear, and the inability to deploy at large scale without significant cost, we have given up on using the ENVI Deep Learning Module with PyTorch for Antarctic image feature classification. It is a great starter interface for learning the ins and outs of model building and training, but difficult to deploy for the purposes of the ICEBERG project.
- We are currently moving forward with Mask R-CNN, and have cloned a copy of the open-access Mask R-CNN library, built in Python 3 on Keras and TensorFlow. Code development is based upon that for now.
- Acquired 3,153 multispectral ortho images from the PGC, plus the 13 images on the Google Drive from the LandCover group which we began with (thanks Helen!). Most are 4-band WV-2, with 132 4-band QB-2 and 640 8-band WV-3 images; all are 2-m-resolution ortho-rectified, 16-bit integer format. We could post these on the Google Drive if it can handle it (~3 TB total); otherwise we could distribute them to the rest of ICEBERG another way (perhaps an FTP tunnel from a folder on our local server?).
- Main focus right now is building adequate training datasets. We are currently looking at an iterative approach: start with a weakly trained (relatively poor-performing) model built from hand-annotated images, hand-filter the good results, and retrain with the expanded dataset (a sketch of this loop is below).
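As a rough illustration of that loop (not the actual ICEBERG code; train(), predict(), and is_good() below are stand-ins that would wrap Mask R-CNN training, inference, and the human hand-filtering step in practice):

```python
# A schematic bootstrap loop -- the model calls are stand-ins, not real Mask R-CNN code.

def train(annotations):
    """Stand-in for model training; just remembers what it was trained on."""
    return {"trained_on": list(annotations)}

def predict(model, image):
    """Stand-in for inference; returns a candidate annotation for one image."""
    return {"image": image, "mask": "predicted-mask"}

def is_good(candidate):
    """Stand-in for hand-filtering; in practice a human vets each result."""
    return True

hand_annotated = [{"image": f"hand_{i}.tif", "mask": "hand-mask"} for i in range(3)]
unlabeled = [f"scene_{i}.tif" for i in range(10)]

training_set = list(hand_annotated)
for iteration in range(3):                            # a few bootstrap rounds
    model = train(training_set)                       # (re)train on the current set
    candidates = [predict(model, img) for img in unlabeled]
    accepted = [c for c in candidates if is_good(c)]  # keep only hand-vetted results
    training_set.extend(accepted)                     # grow the set and repeat
    print(f"iteration {iteration}: {len(training_set)} training samples")
```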
- Continuing development of the Antarctic feature classification model using the ENVI Deep Learning Module & PyTorch. We have a subset of polar-stereo images but are waiting on a PGC request for more (100 images, enough for a good training set). The model is not well-trained yet and is still being tuned. It is also being installed on a local GPU server (in progress) to improve model performance. We plan to have some initial results to show at the next all-hands meeting, if we get enough imagery.
- Antarctic Feature Classification: getting Antarctic multispectral WV data from the PGC for land-surface classification (housed at PGC but not yet publicly distributed).
- Using the ENVI Deep Learning Module (built upon PyTorch) to derive a training set for classification of ROCK, SNOW, CREVASSES, and WATER. Initially using an auto-derived rock mask, with false positives hand-filtered out of that dataset (an illustrative masking sketch is below).
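For illustration only, a toy band-threshold rock mask might look like the following; the band order, indices, and thresholds here are assumptions for the sketch, not the values used in the ENVI workflow:

```python
# Illustrative only: a toy band-threshold "rock mask" over a synthetic 4-band scene.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 2000, size=(4, 256, 256)).astype(np.float32)  # B, G, R, NIR (assumed order)

brightness = scene.mean(axis=0)
nir = scene[3]

# crude heuristic: rock is darker than snow but brighter than open water in the NIR
rock_mask = (brightness < 800) & (nir > 200)
print("rock pixels:", int(rock_mask.sum()), "of", rock_mask.size)
```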
- Substantial progress on TMA photo processing -- 1,361 viable flightlines selected; image subsets are being processed and automated in Agisoft PhotoScan Pro on a CUDA node at CU Boulder.
- Using our previous SIFT code, modified to use the OpenCV libraries rather than a CPU executable for greater flexibility, for terrain matching and fiducial-mark selection (a minimal matching sketch is below).
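A minimal sketch of what the OpenCV-based matching step looks like, assuming an OpenCV build where cv2.SIFT_create is available (>= 4.4); the synthetic images stand in for two overlapping TMA frames:

```python
# Minimal OpenCV keypoint matching between two overlapping (synthetic) views.
import cv2
import numpy as np

rng = np.random.default_rng(1)
base = cv2.GaussianBlur((rng.random((600, 600)) * 255).astype(np.uint8), (5, 5), 0)
img1 = base[50:450, 50:450]      # "source" view
img2 = base[80:480, 80:480]      # overlapping, offset "target" view

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# brute-force k-NN matching with Lowe's ratio test to discard ambiguous matches
bf = cv2.BFMatcher(cv2.NORM_L2)
pairs = bf.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(f"{len(good)} ratio-test matches out of {len(pairs)} candidates")
```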
- Feature classification algorithm: working with CU EarthLab professionals to generate a CNN image feature classification algorithm suitable for use on high-resolution color mosaics of Antarctica. Met Wednesday 4/17; will follow up next week.
- Handled the ASIFT Scratch folder issue ("tile_planner.py Attribute Error"). The code worked as long as users read the README before executing.
- Uploaded ~105 cropped TMA images (along with uncropped versions for reference) to the Google Drive for testing & use on ASIFT algorithm.
- Let Aymen know that the "CudaSIFT" executable outlined in the last telecon (from the Git "CudaSIFT" repository) is not executing the same algorithm as the "fast_imas_IPOL" ASIFT executable, which performs a set of affine transformations on each image before running the hyper-descriptors. Although a CUDA-based GPU implementation will invariably be faster than a CPU one, the algorithm comparison shown last week was apples-to-oranges, since CudaSIFT is solving a simpler problem than the CPU-based ASIFT implementation. Any SIFT implementation such as CudaSIFT can, however, be turned into an ASIFT implementation by affine-transforming the images before searching and running the algorithm across the resulting spectrum of affine-transformed inputs (a sketch of this wrapping is below). That requires more code, a larger search, and greater execution time, however.
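A sketch of that wrapping idea: pool keypoints detected over a small set of affine-warped versions of the image. The rotation/tilt grid here is illustrative and far coarser than real ASIFT sampling:

```python
# Pool SIFT keypoints/descriptors detected over a few affine-simulated views of the image.
import cv2
import numpy as np

def affine_sift(img, detector):
    h, w = img.shape[:2]
    keypoints, descriptors = [], []
    for angle in (0, 30, 60):                      # coarse rotation sampling (assumed)
        for tilt in (1.0, 2.0):                    # coarse tilt sampling (assumed)
            rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            view = cv2.warpAffine(img, rot, (w, h))
            if tilt != 1.0:
                view = cv2.resize(view, (w, int(h / tilt)))   # simulate tilt by squashing one axis
            kp, des = detector.detectAndCompute(view, None)
            if des is not None:
                keypoints.extend(kp)
                descriptors.append(des)
    return keypoints, (np.vstack(descriptors) if descriptors else None)

rng = np.random.default_rng(2)
img = cv2.GaussianBlur((rng.random((400, 400)) * 255).astype(np.uint8), (5, 5), 0)
kp, des = affine_sift(img, cv2.SIFT_create())
print(f"{len(kp)} keypoints pooled across the simulated views")
```

A full wrapper would also map each keypoint's coordinates back through the inverse warp to the original image frame, which this sketch omits.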
Work toward TMA Science Objectives:
- Generated 1,839 TMA flightline composites for high-level image-set selection over the entire TMA dataset, to efficiently sort and pare the 98,000 image triplets down to high-quality usable subsets.
- Subsetting TMA dataset for structure-from-motion DEM generation and matching.
- Currently developing a TMA fiducial-point-finder and auto-cropping tool.
Due to inter-team issues on this workflow, this segment of the project is being rescoped, at least temporarily. The Boulder team is continuing to work on a meaningful science algorithm and is finally making great headway.
- Using Agisoft PhotoScan Pro to generate high-density point clouds from overlapping Antarctic TMA image files.
- Must convert these to gridded elevation DEMs in pixel space (see the gridding sketch after this list).
- Then image-match on DEMs, per previous discussions.
- Finally, re-iterate surface generation with new point matches, to generate a geo-located higher-accuracy surface with minimum distortion.
- Quantify errors in elevations.
- Also putting together an auto-fiducial-point-finder, to auto-crop the images. This is not part of the ICEBERG code repository but is necessary for our specific science goal.
- ASIFT ticket #25 is being compiled (large list of TMA imagery, both pre- and post-cropped), and matching/overlapping WorldView images. Will dump into the Google Drive soon.
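A minimal sketch of the point-cloud-to-DEM gridding step referenced above, assuming a simple per-cell mean of point elevations; the cell size and the synthetic points are placeholders for the actual PhotoScan export:

```python
# Grid an (x, y, z) point cloud into a pixel-space DEM by averaging elevations per cell.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((10_000, 3)) * [1000.0, 1000.0, 50.0]   # synthetic x, y (m) and z (m)

cell = 10.0                                   # grid cell size in metres (assumed)
nx = ny = int(1000 / cell)
ix = np.clip((pts[:, 0] // cell).astype(int), 0, nx - 1)
iy = np.clip((pts[:, 1] // cell).astype(int), 0, ny - 1)

sums = np.zeros((ny, nx))
counts = np.zeros((ny, nx))
np.add.at(sums, (iy, ix), pts[:, 2])          # accumulate elevations into cells
np.add.at(counts, (iy, ix), 1)

dem = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)   # NaN where no points fell
print("DEM shape:", dem.shape, "| empty cells:", int(np.isnan(dem).sum()))
```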
(Combined with updates on previous tasks)
- Large upload to the ASIFT/mmacferrin branch, updated code which will be merged with the ASIFT/devel branch with a pull request, so updates can be more hand-in-hand moving forward.
- Included README.md files for the project, plus for each individual phase. Includes command-line-execution instructions as well as primary Python function interfaces. Some README.md files still need a bit of cleaning up (messy formatting, etc). Not-yet-implemented Python scripts are noted.
- Ticket #11: RANSAC filter code updated (closed)
- Ticket #17: Directory structure reorganized into "src" and "doc" folders, per Aymen's suggestions (finished)
- Ticket #14: Updated Workflow Diagram. See images below. Still need to update diagram with more detail for Phases 3 and 4 (will close when finished)
- Ticket #19: Removed extraneous files in latest commit. (closed)
- Ticket #21: Updated README.md files and got rid of .docx versions. (closed)
- Ticket #20: Got rid of "ASIFT_Scratch" directory and updated the main "README.md" file to describe setup of the Scratch directory on the target machine.
- Ticket #22: Created a pull request to merge the "mmacferrin" branch with the "devel" branch; we will iterate on that until the branches are compatible and merged. The "fast_imas_IPOL" executable code was eliminated from the main branch; Ioannis will create a separate branch from the original repository to keep a snapshot of that executable separate. Once Ioannis has made the "fast_imas_IPOL" clone, the ticket will close.
- Algorithm Implementation: When testing code to auto-locate fiducial points on airborne images, I found that I got far superior results using OpenCV algorithms exclusively, rather than fast_imas_IPOL. Still testing to see if this is robust; if so, I will update with two versions of the asift_executable.py code, one using "fast_imas_IPOL" and another using OpenCV exclusively, so that Aymen can test it himself.
On Monday 2/11, had a telecon with Aymen, Ioannis, and Matteo. Got a good working plan together for moving forward. Parts are in motion now.
Also on Monday 2/11, had a conversation with CS Professors Chris Heckman (CU Boulder) and David Crandall (Indiana U). They had good feedback and general ideas on the algorithms, and wish to continue collaboration in the near future on this type of project. Matteo and Aymen were on the call, and noted that it was helpful for them to see the total problem laid out that we were trying to solve.
Workflow Diagram, ASIFT Use Case:
Workflow Diagram, Phase 1:
Immediate tickets (next in line to resolve before next telecon) from Boulder:
- Ticket #10: Update the SRS document with requirements from our end. This is easier now that we have a better idea of how the final algorithm will work, although it may still take some iterations before we're done.
- Ticket #14: Finish fleshing out Phase 3 and 4 workflow diagrams.
- Ticket #23: Iterate with Aymen on his testing of ASIFT-variant algorithms.
- Ticket #24: Quantify success/failure metrics for larger-scale testing and automation.
Geo-locating historic photos has proven difficult, at best. We are working toward a functional version of the algorithm and are making progress. Mike will discuss today his ideas for reworking how the algorithm operates. My AGU poster, "Automatically geolocating historic airborne imagery toward constructing an 80-year time series of Antarctic outlet glacier extents and volumes," can be used to illustrate the point.
The ASIFT project has moved into a 3-phase design:
Phase 1: Images ("source" and "target") are tiled and all image tiles are contrast-enhanced. Tiles are temporarily stored in an "ASIFT_Scratch" working directory and (optionally) deleted after use. A minimal sketch of this step is below.
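A minimal sketch of what this tiling and contrast-enhancement step might look like, assuming CLAHE as the enhancement choice and a 512-pixel tile size; the scratch-directory name follows the text above, but nothing else here is the actual ASIFT code:

```python
# Cut a (synthetic) grayscale image into fixed-size tiles, contrast-enhance each tile
# with CLAHE, and write the tiles to a scratch directory, keeping offsets in the filenames.
import os
import cv2
import numpy as np

def tile_and_enhance(img, tile=512, scratch="ASIFT_Scratch"):
    os.makedirs(scratch, exist_ok=True)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    paths = []
    h, w = img.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            enhanced = clahe.apply(img[r:r + tile, c:c + tile])
            path = os.path.join(scratch, f"tile_r{r}_c{c}.png")
            cv2.imwrite(path, enhanced)          # row/col offsets live in the filename
            paths.append(path)
    return paths

rng = np.random.default_rng(4)
demo = (rng.random((1200, 1500)) * 255).astype(np.uint8)
print(len(tile_and_enhance(demo)), "tiles written to the scratch directory")
```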
Phase 2: Keypoints are generated using the SURF algorithm (the free alternative to ASIFT). Keypoint and output-image files are stored in the same temporary scratch directory as the "source" image.
Phase 3: The SURF algorithm produces a large number of false-positive keypoints, especially in highly variable terrain, so RANSAC filtering is performed on the keypoints generated in Phase 2.
Both SURF and RANSAC are non-deterministic processes: running each algorithm twice often produces very different results. I've had luck running both ASIFT and RANSAC iteratively up to 100 times, compiling the results from each run/filter step, and then running a RANSAC filter over the previously individually-filtered results, which appears to give a better set of correlated coordinates (a sketch of this iterate-and-pool approach is below). However, false positives still make their way into the dataset, and an elevation-based approach may provide superior performance.
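A sketch of that iterate-and-pool idea, with synthetic point pairs standing in for real SURF/ASIFT matches; cv2.findHomography with the RANSAC flag plays the role of both the per-run and the final filters:

```python
# Run a (noisy) matching step several times, pool the RANSAC survivors, then run one
# final RANSAC over the pooled set. Synthetic matches stand in for real keypoint pairs.
import cv2
import numpy as np

rng = np.random.default_rng(5)
true_shift = np.array([12.0, -7.0])            # the "correct" offset between the two images

def one_matching_run(n_inliers=40, n_outliers=20):
    src_in = rng.random((n_inliers, 2)) * 1000
    dst_in = src_in + true_shift + rng.normal(0, 0.5, src_in.shape)   # good matches
    src_out = rng.random((n_outliers, 2)) * 1000
    dst_out = rng.random((n_outliers, 2)) * 1000                      # false positives
    src = np.vstack([src_in, src_out]).astype(np.float32)
    dst = np.vstack([dst_in, dst_out]).astype(np.float32)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)           # per-run filter
    keep = mask.ravel().astype(bool)
    return src[keep], dst[keep]

pooled_src, pooled_dst = [], []
for _ in range(10):                            # 10 independent runs here (up to 100 in practice)
    s, d = one_matching_run()
    pooled_src.append(s)
    pooled_dst.append(d)

src_all = np.vstack(pooled_src)
dst_all = np.vstack(pooled_dst)
H_final, mask_final = cv2.findHomography(src_all, dst_all, cv2.RANSAC, 3.0)   # final pass
print(f"{int(mask_final.sum())} of {len(src_all)} pooled matches kept by the final RANSAC")
```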
We run Structure-from-Motion on Antarctic TMA flightlines to produce a digital terrain model (DTM) (see poster). Running ASIFT on this DTM rather than on the imagery itself may improve the performance of this approach. HOWEVER, it would mean the algorithm can only be run on images for which Structure-from-Motion can be performed: individual images can't do this, only an overlapping series of images can, which we have for Antarctica but other users obviously may not. We can discuss the decisions being made and the best way to move forward.
To inform the use of the ASIFT executable on large satellite images, I ran a performance analysis to see how the executable scales with respect to image size. This will help inform what size tiles we need to use and what strategies of tiling are necessary. I was particularly interested to see which input image (the "Source" image or "Target" image) was the primary bottleneck for performance, i.e. if one mattered more than the other.
For this test, the code was run on a 12-core Intel Xeon machine with 48 GB RAM, running Kubuntu 16.04. The absolute numbers depend on the machine being used, but the performance scaling should be consistent across platforms and machines. I used the output of "/usr/bin/time -l" to record the execution time ("system" + "user" time) and maximum memory used for each ASIFT run.
I ran three sets of tests, with 10 iterations at each data point (a timing-harness sketch follows this list):
- Scale both images up together, from 500x500 (0.25 megapixels) to 7500x7500 (56.25 megapixels each).
- Scale the "Source" image from 500 to 7500 pixels per side, holding the "Target" image constant at 3000x3000 pixels (9 megapixels).
- Hold the "Source" image constant at 3000x3000 pixels, scaling the "Target" image from 500 to 7500 pixels per side.
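A harness along these lines could be as simple as the sketch below; it uses Python's resource module instead of /usr/bin/time so the sketch is self-contained, and the command being timed is a placeholder rather than the real ASIFT invocation:

```python
# Time a child process and record its CPU time and peak memory (Unix only).
import resource
import subprocess
import time

def run_and_measure(cmd):
    """Run one child process; return (wall seconds, user+sys seconds, max RSS in kB on Linux)."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    t0 = time.time()
    subprocess.run(cmd, check=True)
    wall = time.time() - t0
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    return wall, cpu, after.ru_maxrss

# In the real sweep, `cmd` would invoke the ASIFT executable on a source image resized to
# size x size pixels against a fixed 3000x3000 target; `sleep` is a stand-in so the
# harness itself runs anywhere.
for size in (500, 1500, 3000, 5000, 7500):
    cmd = ["sleep", "0.1"]                      # placeholder for the real executable call
    wall, cpu, rss_kb = run_and_measure(cmd)
    print(f"{size}px stand-in run: wall={wall:.2f}s cpu={cpu:.2f}s maxRSS={rss_kb}kB")
```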
Process memory depends upon the size of the LARGEST image, regardless of whether it is the Source or the Target. At N^0.81 with respect to image size, memory scales somewhat better than linearly (N^1), though not as well as square-root (N^0.5) would. This is quite good, actually. I know Aymen wanted to see whether he could improve memory usage by hand-recoding the SIFT or SURF algorithms, and perhaps that will help, but better-than-linear memory scaling is already quite good (my virtual machine maxed out at 48 GB, hence the top cutoff in that plot).
Execution time scaled quadratically (N^2) with the geometric mean of the two image sizes; it didn't matter which was larger, Source or Target. Or to put it another way, it scales linearly (N^1) with the product of the two image sizes. One could attempt to circumvent this by using smaller image tiles, but tiling is itself an N^2 problem (the number of Source tiles to be compared against each Target tile grows correspondingly), so in the end it wouldn't help (see the quick arithmetic check below). Thankfully, GPU or high-CPU HPC implementations will help dramatically, since farming out multiple tiles to multiple nodes is trivially easy to parallelize.
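A quick arithmetic check of that point, using the observed "time scales with the product of Source and Target pixel counts" model with an arbitrary constant:

```python
# Tiling both images does not reduce the total pairwise-comparison cost under this model:
# the number of tile pairs grows exactly as fast as the per-pair cost shrinks.
S = T = 7500 * 7500                      # pixels in Source and Target
c = 1e-9                                 # arbitrary cost constant for illustration

untiled = c * S * T

tile = 2000 * 2000                       # pixels per tile
n_pairs = (S / tile) * (T / tile)        # every Source tile compared with every Target tile
tiled = n_pairs * (c * tile * tile)

print(f"untiled: {untiled:.1f}   tiled: {tiled:.1f}")   # identical under this model
```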
The code to generate these is in the "ASIFT/mmacferrin" branch; I can create a pull request to port it over if you'd like.
This doesn't replace the more formal strong- and weak-scaling analysis that Rutgers performs on the final algorithms in HPC implementations, but it gives me enough information to know how to move forward with a tiling approach, which we will need for WorldView images that range from 100 megapixels to 2 gigapixels apiece. Thankfully, the ASIFT 2.1 executable allows trivially easy tiling of GeoTIFF images; it just requires us to do the bookkeeping in the implementation and the results (a bookkeeping sketch is below). I have a clear picture of what needs to be done here, and could give a description to Brad or Aymen if they'd rather pick that up.
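The bookkeeping amounts to remembering each tile's pixel offsets and geotransform so matches found in tile coordinates can be mapped back to the full image. A sketch, assuming rasterio for the windowed reads (GDAL or the ASIFT 2.1 tiling itself could be used instead) and a placeholder filename:

```python
# Read fixed-size windows from a large GeoTIFF and record each tile's offsets/transform.
import rasterio
from rasterio.windows import Window

TILE = 2000  # pixels per tile side (assumed)

tiles = []
with rasterio.open("worldview_scene.tif") as src:            # placeholder path
    for row_off in range(0, src.height, TILE):
        for col_off in range(0, src.width, TILE):
            win = Window(col_off, row_off,
                         min(TILE, src.width - col_off),
                         min(TILE, src.height - row_off))
            data = src.read(1, window=win)                    # band 1 of this tile
            tiles.append({
                "row_off": row_off,                           # pixel offsets for bookkeeping
                "col_off": col_off,
                "transform": src.window_transform(win),       # tile-local geotransform
                "shape": data.shape,
            })

print(len(tiles), "tiles recorded")
```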
An example ASIFT'ed image is posted here, using two WorldView satellite tiles over Antarctica's Dry Valleys, from 2010 and 2016 respectively, with a 2000x2000-pixel (4 Mpx) tile taken from each image. The tiles cover roughly the same area, although snow cover and lighting vary between them. Keypoint matches (shown by the lines between the tiles) are split between "good" matches (matching the same location in each image, indicated by more-or-less horizontal lines) and "poor" matches running diagonally all over. A topologically-consistent version of the RANSAC algorithm will be used to help filter out the "poor" results. There are 790 matches here, and only 1-2 dozen are needed to properly pin down an image, so filtering can be aggressive. Also, in regions of low contrast (such as the rock formations in the valley), few matches are found, precisely in the non-changing areas of the image we'd most like to match, so it will be necessary to contrast-scale the images to (hopefully) pick out these more subtle features. This will require an order of magnitude more processing power (running ASIFT on multiple versions of each source image against multiple versions of each target), but we can discuss how best to scale this up.
Tiles horizontally aligned:
Tiles vertically aligned:
Again, I can work with Aymen and/or Brad offline on the best way to iterate forward in the next month. I can move forward on the tiling and/or on the contrast enhancements and RANSAC filtering.
- Mike
The past two weeks have been full of much ado about other things. Before leaving for Greenland (4/18), the Boulder team plans to:
- Complete ASIFT Use Case document draft, to more formally flesh-out the diagram I shared last time.
- Re-upload the ASIFT source code (my own Git mistake removed it; I'll re-upload).
- Upload a training dataset (images in Alaska).
- Define future testing datasets (primarily the Antarctic Peninsula).
After 5/28, upon return from Greenland:
- Write a Python wrapper around the ASIFT C++ module to allow input of images/data besides PNG, eliminating the dependency on the OpenCV library (a wrapper sketch follows this list).
- Define a lightweight performance analysis of memory/CPU growth with respect to image/tile size on the test images; this will help us define how to optimize when porting to the ICEBERG infrastructure, and/or a tiling scheme. Large tiles already break the algorithm, so knowing the memory and CPU requirements for NxM-sized tiles will help.
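A sketch of what the planned wrapper could look like; the executable name and its argument order are placeholders (the real ASIFT CLI may differ), and rasterio + Pillow are used here in place of OpenCV for the format conversion:

```python
# Convert arbitrary (e.g. 16-bit GeoTIFF) inputs to 8-bit PNG and call the ASIFT executable.
import subprocess
import tempfile
from pathlib import Path

import numpy as np
import rasterio
from PIL import Image

def to_png(geotiff_path, out_dir, band=1):
    """Convert one band of a (possibly 16-bit) GeoTIFF to an 8-bit PNG for the executable."""
    with rasterio.open(geotiff_path) as src:
        data = src.read(band).astype(np.float64)
    lo, hi = np.percentile(data, (2, 98))                     # simple contrast stretch
    scaled = np.clip((data - lo) / max(hi - lo, 1e-9) * 255, 0, 255).astype(np.uint8)
    out = Path(out_dir) / (Path(geotiff_path).stem + ".png")
    Image.fromarray(scaled).save(out)
    return out

def run_asift(source_tif, target_tif, executable="./asift_executable"):  # placeholder name
    with tempfile.TemporaryDirectory() as tmp:
        src_png = to_png(source_tif, tmp)
        tgt_png = to_png(target_tif, tmp)
        # placeholder argument order; the real executable's interface would go here
        subprocess.run([executable, str(src_png), str(tgt_png)], check=True)

# run_asift("source_scene.tif", "target_scene.tif")   # example call (paths are placeholders)
```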