Topic: LCC IDMN Webinar Series – Next Generation Data Integration Challenges
Session status: Not Started (Registration)
Session date: Thursday, August 28, 2014
Starting time: 2:00 pm, Eastern Daylight Time (New York, GMT-04:00)
Duration: 1 hour 30 minutes
Presenter: Ashley Fortune Isham
Description: Future landscape-scale analyses will rely on more and larger datasets, more computationally intense models, and an ever-increasing rate of data inputs and outputs. Integration activities must keep pace with these needs.
I am researching online mapping services for a class project and found this:
National Wild Fish Health Survey Database.
It runs on PostgreSQL. I don't know a lot about PostgreSQL beyond the fact that it is open source and competitive with MySQL and other enterprise-level database options. I've mentioned before that it has an extension called PostGIS that adds excellent handling of spatial data.
Seeing this application really caught my eye. I think it is a good aspirational model for what we could do with DLIA data. It appears to be all open source. I'm copying out text they shared at <http://www.fws.gov/wildfishsurvey/database/page/about>:
Major software components:
- MapFish, an open source web mapping development framework
- GeoServer, an open source software server written in Java that allows users to share and edit geospatial data
- MapServer, an open source development environment for building spatially-enabled internet applications
- PHP, a free, widely used, general-purpose scripting language
- PostgreSQL, an open source database management system
- PostGIS, an open source software program that adds support for geographic objects to the PostgreSQL database
- Smarty, a web template system written in PHP
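To give a sense of what PostGIS adds on top of plain PostgreSQL, here is a small sketch that builds a spatial query for survey sites within a radius of a point. The table and column names (`survey_sites`, `site_name`, `geom`) are my invention, not the Wild Fish Health Survey schema; `ST_DWithin`, `ST_MakePoint`, and `ST_SetSRID` are standard PostGIS functions.

```python
def sites_within_radius_sql(lon, lat, meters):
    """Build a PostGIS query for survey sites within `meters` of a point.

    The table (survey_sites) and columns (site_name, geom) are hypothetical.
    Casting to geography makes ST_DWithin's distance argument meters
    rather than degrees; 4326 is the WGS84 lon/lat spatial reference.
    """
    return (
        "SELECT site_name "
        "FROM survey_sites "
        "WHERE ST_DWithin(geom::geography, "
        f"ST_SetSRID(ST_MakePoint({lon}, {lat}), 4326)::geography, {meters});"
    )
```

The resulting string could be run with any PostgreSQL client library against a database that has the PostGIS extension installed.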
I’ll probably explore this more, but wanted to share it with you now.
I recently ran a Maxent model of Eastern hemlock on my personal computer.
I modeled ~489 records; the Nautilus model used over 2,000.
From the model output:
This is a representation of the Maxent model for Tsuga_canadensis. Warmer colors show areas with better predicted conditions. White dots show the presence locations used for training, while violet dots show test locations.
This is not really a fair comparison, but the difference in predictions between the model built on 489 records and the one built on 2,000+ is interesting.
The EECS model for Eastern hemlock is different: it uses more data, and it was run 20 times with 10% of the records reserved for testing on each run before the results were synthesized into one image.
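That 20-run, 10%-holdout design can be sketched as repeated random splits whose per-run prediction surfaces are averaged cell by cell. This is my own illustration of the general pattern, not the EECS code:

```python
import random

def repeated_holdout_splits(records, n_runs=20, test_frac=0.10, seed=42):
    """Yield (train, test) splits; each run reserves test_frac of the
    records for testing, mirroring a 20-run / 10%-holdout design."""
    rng = random.Random(seed)
    n_test = max(1, round(len(records) * test_frac))
    for _ in range(n_runs):
        shuffled = records[:]
        rng.shuffle(shuffled)
        yield shuffled[n_test:], shuffled[:n_test]

def average_predictions(per_run_grids):
    """Synthesize one prediction surface by averaging the per-run grids
    cell by cell (each grid is a flat list of suitability values)."""
    n_runs = len(per_run_grids)
    return [sum(cells) / n_runs for cells in zip(*per_run_grids)]
```

Each split would feed one Maxent run, and the averaged grid is the single synthesized image.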
I will post some additional comparisons for other trees in some later posts.
Environmental layers are available to the public via IRMA.
Source metadata are not available at http://tiny.utk.edu/atbi.
I have attempted to map, or cross-walk, the layers listed in the Simmerman et al. paper to the names of datasets available for download from IRMA.
Table. Mapping from UTK names to IRMA names.

| # | UTK name | IRMA name | IRMA reference |
|---|----------|-----------|----------------|
| 1 | Soil Organic Type | Soil Classification | https://irma.nps.gov/App/Reference/Profile/2198007 |
| 2 | Topographic Convergence Index | Topographic Wetness Index | https://irma.nps.gov/App/Reference/Profile/2208650 |
| 3 | Solar Radiation Data | 30-m Potential Solar Radiation | https://irma.nps.gov/App/Reference/Profile/2208716 |
| 4 | Terrain Shape Index | 30-m Topographic Shape Index | https://irma.nps.gov/App/Reference/Profile/2208684 |
| 5 | Terrain Shape Index | 30-m Topographic Ruggedness Index Model | https://irma.nps.gov/App/Reference/Profile/2182017 |
| 6 | Digital Elevation Model | 30-m Lidar Digital Elevation Model | https://irma.nps.gov/App/Reference/Profile/2180606 |
| 7 | Slope in Degrees | 30-m Lidar Slope Model | https://irma.nps.gov/App/Reference/Profile/2180632 |
| 8 | Understory Density Classes | Understory Vegetation at GRSM | https://irma.nps.gov/App/Reference/Profile/1047499 |
| 9 | Leaf On Canopy Cover | Overstory Vegetation at GRSM | https://irma.nps.gov/App/Reference/Profile/1047498 |
| 10 | Vegetation Classes | Great Smoky Mountains NP Vegetation Classification | https://irma.nps.gov/App/Reference/Profile/1021458 |
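For scripting against these layers, the crosswalk can also live in code. The mapping below is transcribed from the table (note that Terrain Shape Index maps to two IRMA datasets, rows 4 and 5); the dictionary itself is just one convenient representation.

```python
# Crosswalk from UTK layer names (Simmerman et al.) to IRMA dataset names,
# transcribed from the table above. "Terrain Shape Index" has two IRMA
# counterparts, so its value is a list.
UTK_TO_IRMA = {
    "Soil Organic Type": "Soil Classification",
    "Topographic Convergence Index": "Topographic Wetness Index",
    "Solar Radiation Data": "30-m Potential Solar Radiation",
    "Terrain Shape Index": ["30-m Topographic Shape Index",
                            "30-m Topographic Ruggedness Index Model"],
    "Digital Elevation Model": "30-m Lidar Digital Elevation Model",
    "Slope in Degrees": "30-m Lidar Slope Model",
    "Understory Density Classes": "Understory Vegetation at GRSM",
    "Leaf On Canopy Cover": "Overstory Vegetation at GRSM",
    "Vegetation Classes": "Great Smoky Mountains NP Vegetation Classification",
}
```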
Note: I am grateful to http://www.textfixer.com/html/csv-convert-table.php, which made it easy to create this table from plain text. It saved me a lot of time, and I expect to add it to my "toolkit" of useful items.
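The same text-to-table conversion is simple to reproduce locally with the standard library, in case the site ever disappears. This is my own minimal sketch, not the textfixer.com implementation:

```python
import csv
import html
import io

def csv_to_html_table(text):
    """Convert comma-separated text into an HTML table, one <tr> per
    input line and one <td> per field, escaping any HTML in the cells."""
    rows = csv.reader(io.StringIO(text))
    cells = ("".join(f"<td>{html.escape(c)}</td>" for c in row) for row in rows)
    return "<table>\n" + "\n".join(f"<tr>{r}</tr>" for r in cells) + "\n</table>"
```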
Poster presented at 2014 North Carolina Partners in Amphibian and Reptile Conservation Meeting.
- Jessel, Tanner; Super, Paul E.; Colson, Thomas (2014): Spatial Data Diversity Supporting Herpetological Research in Great Smoky Mountains National Park. figshare.
Student’s SIS Advisor’s Name: Dr. Suzie Allard
Student’s Practicum Supervisor’s Name: Tom Colson; Scott Simmerman
Number of Credit Hours for which you wish to be enrolled in the practicum: 3 hours
Semester during which you wish to be enrolled in the practicum: Spring 2014
Briefly describe prior and/or current information or library work experience:
I am interested in a career in environmental information management, particularly in a governmental natural resource management agency. My course work to date includes classes in geographic information science, environmental information management, and data visualization.
Indicate the type of information organization in which you wish to take the
practicum: Environmental Information Management Org
Practicum Location: Great Smoky Mountains National Park; National Institute for Computational Sciences
I would like to develop advanced environmental information processing and
data visualization skills by working with species occurrence records and a
high performance computing environment as part of a technology transfer
project between the University of Tennessee and the National Park Service.
The following four practicum goals and associated outcomes are proposed:
(1) Develop proficiency in running the MaxEnt species distribution modelling
program in a PC environment for determining probability of species
distribution given environmental variables and demonstrate acquired
proficiency by providing training and instruction to Park Service staff in
use of the MaxEnt program on Park resources configured to run MaxEnt.
Training materials and sessions will be produced as an outcome of the practicum.
(2) Gain skills with workflow and parallel processing in a high performance
computing environment on a single-system-image supercomputer and demonstrate these skills by generating species distribution models as requested by practicum supervisor. There are currently 540 species models out of ~36,000 species in the park. A collection of new models will demonstrate the outcome of the practicum.
(3) Create documentation for running the MaxEnt model in a PC environment using appropriate technology such as a wiki with walkthroughs, screen captures, or video screencasts as appropriate. A URL will be provided to the final online documentation to demonstrate the outcome of the practicum.
(4) Practice sound data curation principles in managing both model inputs and model outputs by successfully building on the store of models available at . An HPC data management system such as XSEDE (XSEDE.org) will be used to manage the inputs and outputs to demonstrate the outcome of the practicum.
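Goal (2) above is an embarrassingly parallel workload: each species model is independent, so a batch can simply be mapped over workers. The sketch below illustrates the pattern with a placeholder `run_maxent` function; the real work would invoke the MaxEnt program per species on the HPC system, and the function and file names here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def run_maxent(species):
    """Placeholder for one MaxEnt run; a real job would invoke the MaxEnt
    program with that species' occurrence records and environmental layers."""
    return f"{species}.asc"  # hypothetical output grid name

def model_batch(species_list, workers=4):
    """Run many independent per-species models concurrently, the same
    parallel-map pattern an HPC batch job would use at larger scale."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_maxent, species_list))
```

On an actual supercomputer the worker pool would be replaced by scheduler-managed processes, but the per-species independence is what makes the 36,000-species scale tractable.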
As a rough estimate, I expect to spend about 1/3 of the required 150 hours learning MaxEnt in PC and HPC environments, 1/3 writing documentation, and 1/3 creating and delivering training (to commence in March 2014) to enable NPS staff to implement MaxEnt modelling on both PC and HPC platforms.
Submission Field :
Student Comments : Hi Dr. Bishop, I apologize for the late submission. I mistakenly thought I had more time to work on this. I appreciate the opportunity to submit this work with a penalty deduction for lateness. I am a bit concerned about how to incorporate literature, which I did not do. If I need to add in references to literature on online mapping usability, perhaps I could use some additional time this evening? In for a penny, in for a pound after all. Thanks, Tanner
Attached Files : INSC590-GIL-JesselT-UsabilityAssn.pdf
Grade : 52.00 out of 60
This was done well. Future work would benefit from positioning your thoughts within existing literature. In that way, work you do matters to others. I will make it clearer in future versions of the course by assigning more required readings about usability in the GeoWeb week, including the best example from this class, and providing examples of what I mean by referencing literature. I did expect course materials to inform this first assignment (e.g., Harley, Crampton), but I will work to better help connect the dots in future assignments. For example, using another researcher's framework would have been one way to do a usability assignment. Also, headings and subheadings help organize a paper, so you may want to use that structure in future work. The instructions also asked for screen shots, and I think you could have used more of those. At least one for each application is what I will add to the directions.
Submission ( November 4, 2013 8:58:21 AM EST )
Submission Field :
Student Comments :
Attached Files : IS590GeospatialAssn-Tjessel.docx
Grade : 30.00 out of 30
Attached Files : IS590GeospatialAssn-Tjessel_BishopComments.docx