Arizona Activity
Brian finished a downsampling script that takes ground data and converts it into something comparable to satellite data; this is described in more detail in previous updates (a sketch of the general approach follows below). Helen continued working on the classification script and met with the CI team last week to work through some bugs. The current snag is converting an 8-band image into a 2D array. It is still TBD whether the remaining code will run the easy classification and shapefile conversion. The goal is eventually to separate these into two different scripts (classification and shapefile conversion).
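For reference, a minimal sketch of what downsampling a ground spectrum to WV-2-like bands could look like. The band edges are approximate and the simple box-average approach is an assumption; the actual script may instead convolve with DigitalGlobe's relative spectral response curves.

```python
import numpy as np

# Approximate WV-2 band edges in nm (illustrative values only).
WV2_BANDS = {
    "coastal": (400, 450), "blue": (450, 510), "green": (510, 580),
    "yellow": (585, 625), "red": (630, 690), "red_edge": (705, 745),
    "nir1": (770, 895), "nir2": (860, 1040),
}

def downsample_ground_spectrum(wavelengths, reflectance):
    """Average a high-resolution ground spectrum into 8 WV-2-like bands."""
    out = {}
    for band, (lo, hi) in WV2_BANDS.items():
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        out[band] = float(np.mean(reflectance[mask]))
    return out

wl = np.arange(350, 1050, 1.0)      # stand-in spectrometer wavelength grid
spec = np.random.rand(wl.size)      # stand-in ground reflectance spectrum
print(downsample_ground_spectrum(wl, spec))
```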
Helen, Brad, and Brian worked on the classification script. It follows most of the same code as the scripts in the cal pipeline. The main difference is that instead of outputting an 8-band tif image, it outputs a 2D array of values associated with three general landcover classes (snow/ice, geology, and water/shadow). These classes are based exclusively on brightness (summing all bands) and use arbitrary threshold values for now (see the sketch below). Ideally this raster can be used to create a polygon map. The major challenge will be adding multiple conditions to the numpy.where calls, which are better suited to two-way distinctions. Also, once we get more detailed and there may be overlap amongst the geologic units, creating that final array will be more challenging. It is more likely we will end up with multiple layers, each corresponding to one specific class, and perhaps a script/function per parameter. Brian has also been working on downsampling the Polar Rock Repository data, which will help us create new parameters. We also submitted a new pull request for the update to our atmospheric correction pipeline (where there is a single spectrum as a backup correction).
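A minimal sketch of the three-class brightness thresholding, using numpy.select to express multiple conditions more cleanly than nested numpy.where; the image array and threshold values are stand-ins, not the script's actual numbers.

```python
import numpy as np

# Stand-in for an 8-band radiance cube (bands, rows, cols).
img = np.random.rand(8, 100, 100) * 1000

# Sum all bands into a single brightness value per pixel.
brightness = img.sum(axis=0)

# np.select evaluates the conditions in order, avoiding nested np.where.
conditions = [brightness >= 5000, brightness >= 2000]     # arbitrary thresholds
class_ids = [1, 2]                                        # 1=snow/ice, 2=geology
classified = np.select(conditions, class_ids, default=3)  # 3=water/shadow
```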
Helen and Mark met with Brad about where to start on the classification script and have a good direction & outline for upcoming priorities. Brian tackled some little bugs, including reliance on folder structure in the image directory and accessing the library folder files (e.g., earth_sun_distance). Helen added a temporary atmospheric spectrum (derived from a Dark Object Subtraction run of one image) to lib, and Brian wrote it into the specmath script as our new backup: if there is no DOS-R text file, instead of converting images to top-of-atmosphere reflectance, the script carries out the same algebra with the temporary atmospheric spectrum in the library (sketched below). This will translate easily to our pipeline for a lookup table. It also means all of the preprocessing pipeline is ready to test with future classifications. Next, we will be working through the classification script in order to give Rutgers an end-to-end pipeline for testing. We will then resume all previous work (machine learning, lookup table atmospheric correction & manipulation of the Polar Rock Repository for classification parameters) to improve the accuracy.
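A hedged sketch of that fallback logic; the file names and library path are assumptions rather than the actual repo layout.

```python
from pathlib import Path
import numpy as np

# Backup spectrum stored in the library folder (hypothetical file name).
LIB_SPECTRUM = Path("lib/temporary_atmosphere.txt")

def load_atmospheric_spectrum(image_dir):
    """Prefer the image's own DOS-R spectrum; fall back to the library one."""
    dosr_file = Path(image_dir) / "dosr_spectrum.txt"  # hypothetical name
    if dosr_file.exists():
        return np.loadtxt(dosr_file)   # per-image DOS-R values
    return np.loadtxt(LIB_SPECTRUM)    # single backup spectrum
```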
Received and started processing updated imagery from PGC (really excited for the new Fridge release). Talked to Ehsan about trying machine learning for our atmospheric correction; Helen is in the middle of putting together some data pairs of TOA & ground truth to pass along for this (see the sketch below). On the much simpler side of things, Helen also reached out to the USF group about their simpler Rayleigh-only correction to see if that would be sufficient given the lack of aerosols. Brian is tackling some small fixes to our old scripts (e.g., reliance on directory structure) and is putting together a script to downsample ground data so he can start combing through the Polar Rock Repository. We will use the rock repository to supplement the data Mark and Helen collected in the field for machine learning tests, and Brian will be carrying out his own project investigating spectral trends in the Polar Rock Repository samples. This will contribute greatly to more parameters to identify & characterize rock types. Helen is also working with USF to get a working group together to tackle coding up the attribute classes to make a landcover product. Still toying with whether a vector or raster file will be better and trying to understand all the jargon associated with the Python methods. Luckily, the GRFP for Helen will be out of the way by Monday and more time can be put into this problem (let's set up a time to check in about this, Brad?).
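A minimal sketch of assembling the (TOA, ground truth) pairs; the file names and the N x 8 layout are illustrative assumptions.

```python
import numpy as np

# One row per sample site: 8 TOA band values paired with 8 ground values.
toa = np.loadtxt("toa_spectra.txt")        # hypothetical N x 8 file
ground = np.loadtxt("ground_spectra.txt")  # hypothetical N x 8 file

pairs = np.hstack([toa, ground])           # inputs and targets side by side
np.savetxt("training_pairs.txt", pairs)
```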
Continued processing lookup table imagery to derive DOS-R atmospheres. Helen has started comparing these to begin separating classes of different atmospheric conditions based on elevation, latitude, etc. The user workshop was a success! Tested the landcover pipeline on Bridges and practiced using terminal access. Made some connections to pursue other potential atmospheric correction techniques. Helen created a plan for easy classification in order to work through the full landcover pipeline and troubleshoot the final product. Helen and Brian are touching base about coding that up so we can test it on Bridges.
Finally able to get the PGC order through from their new FTP system to start working on the atmospheric lookup table. All preliminary lookup table images have been converted to radiance, and the next step is to manually collect spectra to derive the DOS-R atmospheric averages for each image. Ideally we can use these to help constrain image classes that have similar atmospheric behavior. We will put these values into a lookup table that will be accessed once the image class is determined. The remainder of the preprocessing steps will follow (atmcorr_specmath.py & refl.py). Helen added the gain and offset corrections to the rad.py code (something brought up by DigitalGlobe), checked the performance, and committed it to the repo (a sketch of the corrected conversion is below). Brian is back for the semester! He will be helping put together the code for the lookup table, in addition to pursuing a side project that will contribute to our future spectral library: using the Polar Rock Repository to pull out trends in downsampled spectra that can be used for multispectral parameters in the WV-2/-3 wavelength range. Helen also made some figures comparing different atmospheric correction methods to ground data, which helps visualize the comparisons and where each model performs best. User workshop presentation is in progress!
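For reference, a sketch of the gain/offset form of the DN-to-radiance conversion, following the general form in DigitalGlobe's radiometric calibration notes; the numeric values below are placeholders, with the real ones coming from the image XML and DigitalGlobe's published per-band tables.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset, abscal_factor, effective_bandwidth):
    """Convert raw digital numbers to top-of-atmosphere radiance:
    L = gain * DN * (abscal_factor / effective_bandwidth) + offset
    """
    return gain * dn * (abscal_factor / effective_bandwidth) + offset

# Placeholder values for one band of a small DN array.
dn = np.array([[120.0, 340.0], [560.0, 780.0]])
rad = dn_to_radiance(dn, gain=0.96, offset=-2.6,
                     abscal_factor=0.012, effective_bandwidth=0.054)
```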
Initial comparisons suggest DigitalGlobe's Atmospheric Compensation (AComp) images perform worse than our manual Dark Object Subtraction & Regression method (cool!). This is a nice justification for the atmospheric method we are developing. Access to imagery is back! Helen is working on getting the scripts to run again to work through lookup table imagery. Mark and Helen discussed funding, timelines, and deliverables for the next one to two years. Helen continues to investigate spectral parameter candidates to be used with WV-2/WV-3 data.
Helen has requested three images from DigitalGlobe to be corrected with AComp for atmospheric correction comparison. Access to imagery is temporarily delayed until early next week due to malfunctions with our current equipment. Once it is restored, Helen and Mark can continue to process the atmospheric correction lookup table. Helen has coded the update into the rad.py script and is just waiting to test the change on some images before committing it to the repo. Helen continues to investigate spectral parameter candidates to be used with WV-2/WV-3 data. Helen and Mark have also begun preparations for the upcoming user workshop. Finally, Helen won gold with Team USA last month in Germany!
Heather and Mark discussed final deliverables for the landcover group, including both raster and vector classification products. They have also been in touch with the PGC to determine whether a SpecMap-like web interface is the best way to go. Helen and Mark continue to work on the atmospheric correction lookup table. Helen has started to dive into the spectral data obtained from the Polar Rock Repository to get a feel for the variability in spectral signatures across different rock units, in preparation for generating spectral parameters to be used with WV-2/WV-3 data. Lastly, Helen is currently competing at the U24 Worlds Ultimate Tournament for Team USA! Good luck Helen!
Helen has started putting together a short white paper covering justification, background, and existing work on spectral parameters. This will serve as a jumping-off point for developing the parameters to be coded into the infrastructure this fall and can be incorporated into future paper submissions. The images for AComp correction were submitted to DG for processing. We are having issues with some of the imagery from the PGC coming in different formats, which has tripped up the code (return of the BigTIFF error). Once some separate network issues have been solved, it will be possible to figure out what is different about the new images so we can move forward with the lookup table progress. Additionally, there is a new component that will need to be added to the radiance equation. This shouldn't be horribly complex, and Helen may be able to manage adding it herself. However, in order to maintain current, accurate processing of future imagery, it may be necessary to add a scraper algorithm that watches for the release of new corrections by DigitalGlobe for any future imagery that may be used with ICEBERG.
An initial image order was placed with the PGC to test the lookup table classes for atmospheric correction. Helen attended the EarthCube Annual Meeting with Brad in Denver. Charles Martin from NCAR recommended looking at the Concordiasi project data set, which sent long-duration stratospheric super-pressure balloons from McMurdo, 12 of which released dropsondes to measure atmospheric parameters. Additionally, one of the conference attendees, Dexuan Sha (dsha@gmu.edu), is working on HSR cyberinfrastructure for Arctic sea ice and would be a great potential user for ICEBERG. Otherwise, the poster was well received by the ECAM community. Helen also met with DigitalGlobe's principal scientist in corporate R&D on Friday after the conference. The main takeaways were insight into some additional radiance corrections, how some other atmospheric corrections work, and the bidirectional reflectance distribution function. Additionally, it is now possible to request some AComp-corrected images from him for free to compare against our own methods.
Mark and I have begun working on a possible new direction for the atmospheric correction automation. Instead of automating the dark object subtraction and regression method itself, we will take a page from the ENVI FLAASH atmospheric correction model, which uses a lookup table. We have created a ranking system based on the quality of image attributes such as cloud cover, sun angle, image capture date, etc. After narrowing the image database down to the highest quality images for correction, we further divide them into maritime or mountainous categories along with elevation and latitude, resulting in about 40 total classes. From these classes we will hand-select sample images to correct with dark object subtraction and regression (a process now expedited by the automated workflow we've developed this year). The result will be an average atmospheric spectrum for each image class. We will then need to test the accuracy of these results on the lower quality images and determine whether any changes should be made to the classes. The end product will be an atmospheric correction automation that identifies the attributes of a given image and places it into a specific class used to access a lookup table of atmospheric spectra values to remove from the scene using the atmcorr_specmath.py script (see the sketch below). This is a much simpler automation approach and would require minimal changes to the existing workflow (a different input for atmcorr_specmath.py and no more need for atmcorr_regr.py). Ideally this can still be a dynamic correction that is continuously improved by ground measurements. Aside from this, Helen has been working on the poster and lightning talk for the EarthCube meeting and fixed some minor bugs in the scripts with Brian. The comparisons of DOS-R, FLAASH, and QUAC thus far seem to heavily favor the DOS-R method.
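A hedged sketch of the class-assignment step; the bin edges, class keys, and spectra below are illustrative stand-ins, not the ~40 real classes.

```python
import numpy as np

# Hypothetical lookup table: image-class key -> average atmospheric spectrum.
ATMOS_LUT = {
    ("maritime", "low", "coastal"):      np.full(8, 12.0),
    ("mountainous", "high", "interior"): np.full(8, 7.5),
}

def image_class(terrain, elevation_m, latitude_deg):
    """Map image attributes onto a lookup-table key (placeholder bins)."""
    elev = "high" if elevation_m > 1000 else "low"
    zone = "interior" if latitude_deg < -75 else "coastal"
    return (terrain, elev, zone)

# The selected spectrum would then feed atmcorr_specmath.py in place of the
# per-image DOS-R values.
spectrum = ATMOS_LUT[image_class("maritime", 150, -64.5)]
```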
Submitted a new pull request for the atmcorr_specmath.py and atmcorr_regr.py scripts to be pushed to devel. Both are functional. At this point, every preprocessing step aside from ROI identification and spectra collection is automated. All small issues regarding the date lookup for the refl.py script and the remaining bugs in atmcorr_specmath.py have been fixed. A new branch called "experiment_ROI_detection" was created, and the k-means experimentation script was moved there so that all of the finished scripts in "experiment/atmcorr" can be merged to devel without any discrepancy. We are continuing to look into ways to automate ROI detection using either k-means or principal component analysis. Spectra collection may be possible by creating a polygon and exporting all pixel values as an ASCII file (sketched below). We will also be looking into the Polar Rock Repository's spectral library to start developing spectral parameters to fit the pipeline outlined in our use case report. Helen is returning to investigating QUAC and FLAASH for atmospheric correction comparisons. DigitalGlobe was able to remedy the exposure issue: essentially there was just a bad batch of images with this problem, and they will be replacing them for us. We are hoping they will send a few corrected with their AComp software so we can compare that model to our method as well. Atmospheric correction will be the priority leading up to the EarthCube All Hands Meeting (considering this is what Helen will be presenting on); spectral parameters will be the priority after the meeting is over. Brian will be doing an REU in California this summer and will be rejoining us in the fall. This gives Helen enough time to develop the parameters we hope to automate for classification so we can sprint through script development during the fall semester. Helen got an XSEDE account.
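A hedged sketch of the polygon-based spectra collection using rasterio; the image path and polygon coordinates are placeholders.

```python
import numpy as np
import rasterio
from rasterio.mask import mask
from shapely.geometry import Polygon, mapping

# Placeholder ROI polygon in the image's projected coordinates.
roi = Polygon([(412000, 2680000), (412500, 2680000),
               (412500, 2680500), (412000, 2680500)])

with rasterio.open("image_radiance.tif") as src:
    clipped, _ = mask(src, [mapping(roi)], crop=True, nodata=0)

# One row per pixel, one column per band (8 for WV-2); drop nodata rows.
pixels = clipped.reshape(clipped.shape[0], -1).T
pixels = pixels[(pixels != 0).any(axis=1)]
np.savetxt("roi_spectra.txt", pixels)
```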
Finally pushed rad.py, refl.py, and earth_sun_dist.py to devel! All are running successfully on Bridges with the proper outputs. We are now just adjusting the date lookup to access the XML file instead of the user folder name. Brian and Helen were able to sort out the last few bugs to get his atmcorr_specmath.py script to run. This script applies atmospheric values derived from a worksheet created by atmcorr_regr.py to the image and outputs an atmospherically corrected image (sketched below). We will be submitting another pull request for these two scripts today. All that is left of the automation is the Region of Interest (ROI) identification and spectra collection that feed atmcorr_regr.py. Helen is in contact with DigitalGlobe about exposure issues in our image inventory. Helen has also downloaded GitHub Desktop to collaborate on the scripts more efficiently. All computer issues have been sorted, and we are moving forward with atmospheric correction model comparisons and parameter testing.
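A minimal sketch of that spectral-math step, assuming the worksheet reduces to one atmospheric value per band that is subtracted from a (bands, rows, cols) radiance cube; the names and values are illustrative.

```python
import numpy as np

radiance = np.random.rand(8, 100, 100) * 100   # stand-in radiance cube
atmosphere = np.linspace(12.0, 1.0, 8)         # stand-in per-band values

# Broadcast the per-band atmospheric values across the image and subtract.
corrected = radiance - atmosphere[:, None, None]
```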
The computer is finally operational and we are now collaborating on the code with the ability to constantly test through the command line. Helen has begun tracking changes and issues through Git and is adjusting to the proper way to push commits to avoid conflicts (oops). rad.py is done and appears to compress down to 4 GB (on par with the manual run), but spits out a few file size errors in the process (likely classic TIFF's 4 GB limit; see the sketch below). Run time for rad.py is 2 min 49 sec/image. refl.py is now done and compresses down to 3 GB but inverts the image. Run time for refl.py is 3 min 35 sec/image. Helen has also edited the file structure and ReadMes in Brian's branch to be more up to date. Brian has finished the spectral math script that applies atmospheric correction values to an entire radiance image to correct it before converting it to reflectance. We have stalled slightly on other progress due to some computer issues preventing the use of ENVI. Hoping to fix this soon so we can continue checking images and working through parameters. Mark and Helen will begin investigating the potential of WV-3 SWIR data to improve parameter accuracy. Helen reached out to DigitalGlobe and is now in contact about the imagery exposure problems encountered, hoping to secure a visit too. Helen also reached out to a contact at UC Boulder, at the recommendation of Mike Willis, to look into DigitalGlobe's atmospheric correction model for comparison. NASA/USAID SERVIR has still not responded to the follow-up on the user workshop, but Mark has some other people in mind to reach out to once we reach the RSVP "deadline" we set for them (May 1). Does Rutgers still want a sample image or an updated workflow chart in addition to our presentation?
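A sketch of one way to address the tiling, compression, and file-size issues with standard GDAL GeoTIFF creation options; the file names are placeholders. BIGTIFF=YES sidesteps classic TIFF's 4 GB cap on large scenes.

```python
from osgeo import gdal

# Rewrite an image as a tiled, LZW-compressed BigTIFF.
gdal.Translate(
    "image_out.tif",
    "image_in.tif",
    creationOptions=["COMPRESS=LZW", "TILED=YES", "BIGTIFF=YES"],
)
```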
So close to finishing the rad & refl pull request, but having major issues testing it on the NAU lab computer because of environment problems. Helen's abstract was accepted, and she received the travel grant to attend the EarthCube All Hands Meeting. She will be presenting on atmospheric correction comparisons as an application for ICEBERG. Recent developments with DigitalGlobe might take a visit off the table, especially regarding DG AComp access. Nonetheless, we have a few contacts through whom we will try to at least address the image exposure issues we have been noticing. Mark is back in Flag! We are now working through the classification half of landcover and putting together a "toy" example and workflow for the remainder of our use case. Sent the email invite to NASA/USAID SERVIR but haven't received a response yet; following up today.
Brad, Brian, and Helen continue to work through automating atmospheric correction, mainly cleaning up the existing scripts and git structure before tackling the atmospheric correction steps. This includes finishing development on the reflectance script, cleaning up the rad script so it can be pulled, and adjusting tiling & compression. Helen submitted an abstract to present a poster at the EarthCube All Hands Meeting in Denver, CO in July. The poster will focus mostly on the atmospheric correction and comparison work in the Antarctic, because that is what we are aiming to present at the user workshop. We also hope to tack on visits to DigitalGlobe, to discuss some imagery exposure issues and use their AComp software, and to UC Boulder, to discuss overlaps with the ASIFT team. The PGC sent over Aitcho Island imagery (the site of ground data from the field), and we finally acquired licenses for FLAASH and QUAC, so progress on atmospheric correction comparisons in Flagstaff (as a control) and Antarctica has resumed. We are also working through the post-processing of the ground data and automating these procedures as well.
Had a conference call with Brian and Brad to work through current issues in the automatic atmospheric correction development. Investigating an ENVI/Python disagreement and the possibility of k-means classification for region of interest selection. Set up tasks for the upcoming weeks to address file compression, unification of tiling algorithms, the snow mask code, the aforementioned issues, and starting development of the spectral math step. There is a brief PowerPoint in the Landcover section of the Google Drive outlining these issues and tasks. We are also starting to think about which users would be appropriate for the workshop on the landcover front and addressing the questions in the project update template. New imagery was requested from the PGC for Aitcho Island, where Helen was able to collect data; she will hopefully begin offloading this data, comparing it to the images, and assessing the accuracy of the correction thus far.
Caught up with Brian on the reflectance script. Posted to Git to get feedback from Brad because we’ve been having trouble running the script on our own devices. Investigating k-means classification.
Helen is back in Arizona and Mark will return in a few weeks. Helen was able to collect spectra of snow algae (to compare to penguin guano) and multiple colored landcover surfaces at Aitcho Island for atmospheric correction calibration. Prior to leaving for Antarctica, Helen had a productive visit with Brad and Heather at Stony Brook. We were able to successfully implement a snow mask and begin working through issues with the Dark Object Subtraction and Regression automation algorithm. We are exploring the possibility of using k-means classification to cluster similar spectra into classes, both for atmospheric correction and later for unit classification. The biggest obstacle seems to be tying the spectra to zero at band 8 in order to remove the variation in brightness before clustering (see the sketch below). We also hope to use Hieu's shadow algorithm to mosaic a fully non-shadowed Antarctica using images from different times of day for each location. We anticipate completing the atmospheric correction automation by the April release. Brian continues to work on the reflectance conversion script.
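A hedged sketch of that clustering idea: subtract each spectrum's band-8 value so every spectrum ties to zero at band 8, then cluster the result. The pixel array and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

spectra = np.random.rand(5000, 8) * 100   # stand-in pixel spectra (N x 8)
tied = spectra - spectra[:, 7:8]          # force band 8 (index 7) to zero

# Cluster the brightness-normalized spectra into candidate classes.
labels = KMeans(n_clusters=5, n_init=10).fit_predict(tied)
```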
Mark and Helen both in the field
Radiance conversion and atmospheric regression scripts updated in git with batch processing ability (sketched below). Working on the reflectance conversion and better directory management (looking into PGC's code). Lots of imagery troubleshooting... working with the Polar Geospatial Center on some errors appearing in imagery reflectance values and acquiring other atmospheric correction models to compare with the DOS-R method on Flagstaff and Antarctic imagery. Also exploring options for shadow masks and scene flattening using DEMs. Preparations for the field season.
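A minimal sketch of the batch wrapper, assuming one subfolder per image and a per-image script invoked on each; the directory layout and invocation are hypothetical.

```python
from pathlib import Path
import subprocess

# Run the radiance conversion on every image folder in turn.
for image_dir in sorted(Path("imagery").iterdir()):
    if image_dir.is_dir():
        subprocess.run(["python", "rad.py", str(image_dir)], check=True)
```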
Working on a local Flagstaff control study for atmospheric corrections in order to compare different methods against accessible ground-truthed data. Began Antarctic field preparations. Looking to test the radiance conversion script and enable it to batch process a group of imagery.
Posted half of the benchmark images for testing to the drive. Brian (undergrad) cleaned up the atmospheric correction script to share soon. Began comparing atmospheric correction results within the DOS-R method (same image processed by different users: Mark & Helen). Helen feels caught up with GitHub, project activity, and how to navigate the platforms! Discussed the possibility of December visits to the Polar Geospatial Center and a Stony Brook meetup for Helen. Will continue to post benchmark imagery, with a new priority focus on imagery from field locations, and begin fieldwork prep. Will also test/compare shadow masks and continue to assist in the preprocessing automation steps.
We welcomed a new undergraduate to our team: Brian, a physics & astronomy major and computer science minor! Created a Python script for the atmospheric regressions relative to band 8 to average atmospheric scattering effects: it reads in groups of .txt files from separate image folders and writes an output file in each image folder with the values needed for image correction (a sketch is below). Looking into atmospheric correction models and how their accuracy compares to our manual method. Looking into IDL and automation for raw image → TOA image and atmcorr image → surface reflectance image.
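A hedged sketch of that regression step: for each image folder, regress each band's collected spectra against band 8 and keep the intercepts as the per-band atmospheric correction values. The file layout and name patterns are assumptions.

```python
import glob
import numpy as np

for folder in glob.glob("imagery/*/"):
    # Stack all collected spectra (hypothetical roi_*.txt, 8 columns per row).
    spectra = np.vstack([np.loadtxt(f)
                         for f in glob.glob(folder + "roi_*.txt")])
    band8 = spectra[:, 7]
    # The intercept of each band regressed on band 8 estimates the additive
    # atmospheric scattering for that band.
    atmos = [np.polyfit(band8, spectra[:, b], 1)[1] for b in range(8)]
    np.savetxt(folder + "atmcorr_values.txt", np.array(atmos))
```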
Working through atmospheric correction techniques, including both image-derived and model-based approaches. This might be a good topic to discuss in the not-so-distant future, considering its ubiquity in characterizing (and quantifying) the spectral properties of surfaces.