Blog Archives

INSC 553 Assignment 5: Marketing Plan for NPS Inventory & Monitoring Program

INSC 553 Assignment 5 is a marketing plan promoting service offerings of the U.S. National Park Service’s Inventory and Monitoring Program. It includes three service offerings with marketing messages tailored to specific target audiences, promotional activities, and measures of performance.


Data Management Short Course for Scientists | ESIP Commons

Found a “Data Management Short Course” today that I think is worth checking out:

The ESIP Federation, in cooperation with NOAA, seeks to share the community’s knowledge with scientists who increasingly need to be better data managers, as well as to support workforce development for new data management professionals. Over the next several years, the ESIP Federation expects to evolve training courses that seek to improve the understanding of scientific data management among scientists, emerging scientists, and data professionals of all sorts.

All courses are available under a Creative Commons Attribution 3.0 license that allows you to share and adapt the work as long as you cite the work according to the citation provided. Please send feedback on the courses to shortcourseeditors@esipfed.org.

via Data Management Short Course for Scientists | ESIP Commons.

Geocoding Historic Homes with Google Fusion Tables

Using data available from Wikipedia concerning historic homes constructed around the turn of the 19th to 20th century, I have created a map of structures in Knoxville, Tennessee designed by the architect George Franklin Barber.

https://www.google.com/fusiontables/embedviz?q=select+col1+from+1MIXz_F_4LmZVPhtrEdfps6V7Ph9LYjAG4I_z6SQ&viz=MAP&h=false&lat=35.98360461230385&lng=-83.90514483678731&t=1&z=15&l=col1&y=2&tmplt=2&hml=GEOCODABLE

I pulled the data from <http://en.wikipedia.org/wiki/List_of_George_Franklin_Barber_works> with a simple copy-and-paste operation into an Apache OpenOffice Calc spreadsheet.

I saved the spreadsheet as a comma-delimited .csv file.

I added a new column and duplicated the combined name-and-address entries. In the new column, I deleted the parentheses surrounding the street address, along with the name of the property, so that only the street address remained.

I then deleted the street address and its parentheses in column 1 to retain only the name of the property.
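For anyone repeating this step without a spreadsheet, the same split can be scripted. Below is a minimal Python sketch, assuming each pasted entry takes the form “Property Name (Street Address)”; the input and output file names are hypothetical placeholders, not part of my actual workflow.

```python
import csv
import re

# Assumes each row's first cell looks like "Property Name (123 Example St)".
# File names here are hypothetical placeholders.
pattern = re.compile(r"^(?P<name>.*?)\s*\((?P<address>[^)]+)\)\s*$")

with open("barber_works_raw.csv", newline="", encoding="utf-8") as infile, \
     open("barber_works_split.csv", "w", newline="", encoding="utf-8") as outfile:
    reader = csv.reader(infile)
    writer = csv.writer(outfile)
    writer.writerow(["name", "street"])
    for row in reader:
        match = pattern.match(row[0])
        if match:
            writer.writerow([match.group("name"), match.group("address")])
        else:
            # Keep rows that don't match the pattern so nothing is silently dropped.
            writer.writerow([row[0], ""])
```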

After saving the .csv file again, I opened up my personal Google Drive account.

I added the “Google Fusion Tables” application from Google, and then selected “create new fusion table” as instructed in Google’s tutorial.

After importing the data, I ran into some problems concerning the division of street, city, and state. Under “File > Geocode,” my “street” column was not immediately recognized as a location. After changing the “street” column type to “Location” in the “Rows 1” view, I was able to direct the application to geocode based on the street address.

At present, this is a very basic map.

I do like the ease with which it obtained the lat/long coordinates, and how it transformed the table data into “cards” with the pertinent information in a “pop-up” on the map.

I’m also happy that it can export the resultant geocoded map as KML.

For future work, I think it would be interesting to link Flickr or another photo-management system to the geocoded map.

I also understand it is possible to add a Google Street view image of the particular property.

https://support.google.com/fusiontables/answer/2796058?hl=en

However, it is necessary to obtain the location information in the form of lat/long for this to work.
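Once coordinates are available, adding a Street View image is largely a matter of constructing an image URL. Here is a minimal Python sketch, assuming Google’s Street View image API with its size, location, and key parameters; the API key is a placeholder.

```python
# A minimal sketch of building a Street View image URL from lat/long coordinates.
# The API key is a placeholder; parameter names follow Google's Street View image API.
def street_view_url(lat, lng, size="600x400", key="YOUR_API_KEY"):
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size={size}&location={lat},{lng}&key={key}")

# Example: coordinates near downtown Knoxville, TN.
print(street_view_url(35.9836, -83.9051))
```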

It is unfortunate that Fusion Tables does not append the lat/long information to the table.

There is software available which can provide this information.

From my course in the Geography Department, I’m aware of this software:

http://ctasgis02.psur.utk.edu/credapopulation/freeware.htm

The application of interest is listed under “Google Geocoder.”

Geocoding with Google Earth is accomplished through two programs: KMLGeocode and KMLReport. The first program reads Excel worksheets or an XML export of a table from a relational database system and creates a KML file that can be loaded into Google Earth. Once the KML file is loaded, Google Earth will attempt to geocode each entry in it. After the file is geocoded, it can be saved to a new KML file. This file will contain the coordinates of each address found. The second program, KMLReport, reads that file and generates two files: one for geocoded addresses and one for addresses that were not found. The file for geocoded addresses is written as a comma-delimited text file that can be loaded into ArcGIS.

At the moment, it seems that obtaining a Street View would require me to obtain the lat/long coordinates for the data, then append them to the Fusion Table.
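One way to script that step, rather than using the KMLGeocode/KMLReport pair, would be a small Python program built on the geopy library’s Nominatim geocoder. This is an assumption on my part (any geocoding service would do), and the file names are the hypothetical ones from the earlier sketch.

```python
import csv
import time

from geopy.geocoders import Nominatim  # pip install geopy

# Hypothetical file names; the input is the split CSV described above.
geolocator = Nominatim(user_agent="barber-homes-map")

with open("barber_works_split.csv", newline="", encoding="utf-8") as infile, \
     open("barber_works_geocoded.csv", "w", newline="", encoding="utf-8") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.writer(outfile)
    writer.writerow(["name", "street", "latitude", "longitude"])
    for row in reader:
        # Add city/state context so the bare street address resolves correctly.
        location = geolocator.geocode(f"{row['street']}, Knoxville, TN")
        if location:
            writer.writerow([row["name"], row["street"],
                             location.latitude, location.longitude])
        else:
            writer.writerow([row["name"], row["street"], "", ""])
        time.sleep(1)  # Nominatim's usage policy asks for at most one request per second.
```

The resulting file, with explicit latitude and longitude columns, could then be re-imported into the Fusion Table.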

Fusion Tables has some advantages, including automatic publishing to the web, the ability to easily update table data, and support for “collaborative data entry.” I can see some potential applications for my neighborhood organization, or for any other collaborative group with limited access to mapping technology (especially a library system or other local municipality that does not have thousands of dollars to spend on ESRI software).

IS 590 – Problems in Information Sciences – Scholarly E-Publishing

My summer semester of 2013 includes a 3-credit-hour course in Scholarly E-Publishing. This course provides exposure to the international electronic publishing industry, particularly journal and book publishing, from a world center of electronic scholarly publishing: London, United Kingdom. It offers an intensive series of talks, site visits, and instruction designed to explore how e-publishing is changing both the way scholarly research is conducted and the way it is communicated. Information professionals from Oxford, Cambridge, the British Library, Elsevier, Wiley, ProQuest, and more share their unique perspectives on scholarly publishing.

Because scientific effort must be clearly communicated and disseminated via scholarly publishing, the course content is of particular interest to the University of Tennessee “SciData” program and is highly relevant to my professional and scholarly goals.  I am particularly interested in understanding how publishers intend to work with open access data repositories such as DataONE, Dryad, or spatial data repositories such as ShareGeo in the UK or EDAC in the U.S.  I am interested in the concept of the data paper, and how a dataset and a data paper might be linked to a publication and shared across platforms with the scholarly community.

The course is a joint venture of University College London Department of Information Studies, the Pratt Institute School of Information and Library Studies in New York City, and the School of Information Sciences at the University of Tennessee.

Given my background in the natural sciences (B.S., Ecology and Evolutionary Biology) and my entry into the UT School of Information Sciences 2013 cohort concurrently with the eight SciData Scholars, I was given the opportunity to participate in the course.

Along with a blog of reflections on daily course material and the London experience, the course culminates in an individualized research paper. I intend to focus on the role of data and datasets in scholarly publishing, a topic most pertinent to my work with the DataONE project and its concern with the accessibility and preservation of environmental data.

For more on the course, follow the tag “INSC 590 – E-Publishing” or view the syllabus online (SU13-IS590-E-publishing, PDF). The course syllabus is also available in .doc format: <http://scidata.sis.utk.edu/sites/all/files/590%20SyllabusTenopir2013.doc>.