Category Archives: Geographic Information Librarianship

INSC 590 – Geographic Information Librarianship

(3) Introduces the concepts related to geographic information librarianship. Students learn geographic and cartographic competencies; master the basic concepts of geospatial data discovery and collection development for cartographic resources; practice metadata creation for geospatial data; and explore issues related to geographic information policy and GIS-related services.



Presentation – Geocoding in Geographic Information Retrieval Systems

I presented this paper at the Geographic Information Systems II (GIS II) session at the 2014 Geography Symposium (see UT Geography Symposium Program 2014).

I represented The University of Tennessee School of Information Sciences at this interdisciplinary conference themed “Mapping outside the lines: Geography as a nexus for interdisciplinary and collaborative research.”

Tanner Jessel, School of Information Sciences, University of Tennessee. “Geocoding in Geographic Information Retrieval Systems.”

Information with a geographic component is among the most valuable and sought-after types of information. However, the majority of geographical information exists as indirectly referenced locational information within unstructured text. Even among well-annotated, spatially explicit datasets, existing metadata can be sparse, inconsistent, or otherwise of poor quality due to time and budgetary constraints. For these reasons, automated annotation of spatially explicit coordinates, a process known as geocoding, is an active area of research in geographic information science. Research concerning geocoding represents a long-term effort with a body of knowledge that has grown across several decades. Unfortunately, funding cycles are not always long-term, and some groundbreaking technologies and tools are no longer available. The present article attempts to synthesize the current state of the art of geocoding and presents a “toolkit” of resources used across the literature to accomplish geocoding, with an emphasis on applications for geographic information retrieval.
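The core idea in the abstract above, resolving place names in unstructured text to coordinates, can be illustrated with a minimal gazetteer-lookup sketch. The gazetteer here is a tiny hand-built dictionary for illustration only; real systems draw on large resources such as GeoNames, and the coordinates below are approximate.

```python
import re

# Toy gazetteer: place name -> (latitude, longitude).
# Approximate coordinates, for illustration only; real systems
# use large gazetteers such as GeoNames.
GAZETTEER = {
    "knoxville": (35.9606, -83.9207),
    "nashville": (36.1627, -86.7816),
    "memphis": (35.1495, -90.0490),
}

def geocode_text(text):
    """Return (place, lat, lon) tuples for gazetteer names found in text."""
    matches = []
    for name, (lat, lon) in GAZETTEER.items():
        # Whole-word, case-insensitive match against the gazetteer entry.
        if re.search(r"\b" + re.escape(name) + r"\b", text.lower()):
            matches.append((name, lat, lon))
    return matches

print(geocode_text("Historic homes in Knoxville and Memphis."))
```

Real geocoders must also handle ambiguity (many places share a name) and context, which is where most of the research effort described above goes.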

Spatial Data Infrastructure Usability Comparison Assignment

Submission Materials

Submission Field :

Student Comments : Hi Dr. Bishop, I apologize for the late submission. I mistakenly thought I had more time to work on this. I appreciate the opportunity to submit this work with a penalty deduction for lateness. I am a bit concerned about how to incorporate literature, which I did not do. If I need to add in references to literature on online mapping usability, perhaps I could use some additional time this evening? In for a penny, in for a pound after all. Thanks, Tanner

Attached Files : INSC590-GIL-JesselT-UsabilityAssn.pdf

Instructor Feedback

Grade : 52.00 out of 60

Comments :

This was done well. Future work would benefit from positioning your thoughts within existing literature. In that way, work you do matters to others. I will make it more clear in future versions of the course by assigning more required readings about usability in the GeoWeb week, including the best example from this class, and providing examples of what I mean by referencing literature. I did expect course materials to inform this first assignment (e.g., Harley, Crampton), but I will work better to help connect the dots in future assignments. For example, using another researcher's framework would have been one way to do a usability assignment. Also, headings and subheadings help organize a paper, so you may want to use that structure in future work. The instructions also asked for screenshots, and I think you could have used more of those. At least one for each application is what I will add to the directions.


Geocoding Historic Homes with Google Fusion Tables

Using data available from Wikipedia concerning historic homes constructed near the turn of the 19th and 20th centuries, I have created a map of structures in Knoxville, Tennessee, designed by the architect George A. Barber.

I pulled the data from <> with a simple copy-and-paste operation into an Apache OpenOffice Calc spreadsheet.

I saved the spreadsheet as a .csv file, comma delimited.

I added a new column and duplicated the street addresses. I deleted the parentheticals surrounding the street address, along with the name of the property.

I deleted the street address and parenthetical in column 1 to retain the name of the property.
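The two column-splitting steps above can also be scripted rather than done by hand. A sketch using Python's standard csv and re modules, with hypothetical sample rows shaped like the Wikipedia entries ("Property Name (Street Address)"):

```python
import csv
import io
import re

# Hypothetical sample rows; the real list was copied from Wikipedia.
raw = io.StringIO(
    "entry\n"
    '"Example House (123 Main St)"\n'
    '"Sample Cottage (456 Oak Ave)"\n'
)

reader = csv.DictReader(raw)
names, addresses = [], []
for row in reader:
    # Split "Name (Address)" into its two parts.
    m = re.match(r"(?P<name>.+?)\s*\((?P<address>[^)]+)\)$", row["entry"])
    if m:
        names.append(m.group("name"))
        addresses.append(m.group("address"))

# Write the two derived columns back out as CSV.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["name", "address"])
for n, a in zip(names, addresses):
    writer.writerow([n, a])
print(out.getvalue())
```

For a short list the spreadsheet approach works fine; a script mainly pays off when the source table changes and the cleanup has to be repeated.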

After saving the .csv file again, I opened up my personal Google Drive account.

I added the “Google Fusion Tables” application from Google, and then selected “create new fusion table” as instructed in Google’s tutorial.

After importing the data, I ran into some problems concerning the division of street, city, and state. Under “File > Geocode,” my “street” column was not immediately recognized as a location address. After changing the “street” drop-down in the “Rows 1” view to “location,” I was able to direct the application to geocode based on the street address.

At the present time, this is a very basic map.

I do like the ease with which it obtained the lat/long coordinates, and how it transformed the table data into “cards” with the pertinent information in a “pop-up” on the map.

I’m also happy that it can export the resultant geocoded map as KML.

For future work, I think it would be interesting to link a Flickr or other photo management system to the Geocoder.

I also understand it is possible to add a Google Street view image of the particular property.

However, it is necessary to obtain the location information in the form of lat/long for this to work.

It is unfortunate that Fusion Tables does not append the lat/long information to the table.

There is software available which can provide this information.

From my course in the Geography Department, I’m aware of this software:

The application of interest is listed under “Google Geocoder.”

Geocoding with Google Earth is accomplished through two programs: KMLGeocode and KMLReport. The first program reads Excel worksheets or an XML export of a table from a relational database system and creates a KML file that can be loaded into Google Earth. Once the KML file is loaded, Google Earth will attempt to geocode each entry in it. After the file is geocoded, it can be saved to a new KML file. This file will contain the coordinates of each address found. The second program, KMLReport, reads that file and generates two files: one for geocoded addresses and one for addresses that were not found. The file for geocoded addresses is written as a comma-delimited text file that can be loaded into ArcGIS.
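The KML files these tools work with are plain XML, so the export side is easy to see in miniature. A sketch that writes placemarks for rows that already carry lat/long values (the sample rows and names are hypothetical, not the real Barber houses):

```python
# Sketch: write a minimal KML file from rows that already carry
# lat/long coordinates. Sample rows are hypothetical.
from xml.sax.saxutils import escape

rows = [
    {"name": "Example House", "lat": 35.97, "lon": -83.92},
    {"name": "Sample Cottage", "lat": 35.96, "lon": -83.94},
]

placemarks = []
for r in rows:
    placemarks.append(
        "  <Placemark>\n"
        f"    <name>{escape(r['name'])}</name>\n"
        # KML coordinates are written longitude,latitude (note the order).
        f"    <Point><coordinates>{r['lon']},{r['lat']}</coordinates></Point>\n"
        "  </Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    "<Document>\n" + "\n".join(placemarks) + "\n</Document>\n</kml>\n"
)
print(kml)
```

A file like this can be opened directly in Google Earth, which is essentially what the KMLGeocode output looks like once coordinates have been filled in.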

At the moment it seems that obtaining a street view would require me to obtain the lat/long coordinates for the data, then append them to the Fusion Table.

Fusion Tables has some advantages, including automatic publishing to the web, the ability to easily update table data, and support for “collaborative data entry.” I can see some potential applications for my neighborhood organization, or for any other collaborative group with limited access to mapping technology (especially a library system or other local municipality that does not have thousands of dollars to spend on ESRI software).

“Racial Dot Map” Visualization Discussion

My assignment in Geographic Information Librarianship is to find, read, and be ready to discuss a peer-reviewed GIS related article for class.

A GIS topic of interest initially came to my attention via my daily browsing for nuggets of information on social media, in this case Facebook.

Someone had shared a “dot map” showing population data for the United States based on census data. Each dot on the map represents one person, and all of the dots are color coded to represent race. Keep in mind these are approximations of race – if you zoom in on my house, you won’t see me exactly, but you’ll see a “representative” of my census block.

This had been done previously (See the “Census Dotmap” at ), but by integrating additional datasets to “guesstimate” the population density by census block, an enhanced visualization was made possible.
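The dot-map technique described above comes down to scattering one random point per person inside each census block, tagged by race group. A simplified sketch with rectangular blocks and made-up counts (real dot maps place points inside actual block polygons):

```python
import random

random.seed(42)  # reproducible scatter

# Hypothetical census blocks: a bounding box plus a count per group.
# Real implementations use the actual block polygons and census counts.
blocks = [
    {"bbox": (0.0, 0.0, 1.0, 1.0), "counts": {"white": 3, "black": 2}},
    {"bbox": (1.0, 0.0, 2.0, 1.0), "counts": {"hispanic": 4}},
]

def scatter_dots(block):
    """One randomly placed dot per person, tagged with the group."""
    xmin, ymin, xmax, ymax = block["bbox"]
    dots = []
    for group, count in block["counts"].items():
        for _ in range(count):
            x = random.uniform(xmin, xmax)
            y = random.uniform(ymin, ymax)
            dots.append((x, y, group))
    return dots

all_dots = [d for b in blocks for d in scatter_dots(b)]
print(len(all_dots))  # one dot per person: 9
```

Scaling this to every person in the United States is what produces the gigabytes of point data the published map renders.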

The original article was linked on “” which had the inset text proclaiming “This is the most comprehensive map of race in America ever created.”

Here’s the original article:

What’s fascinating to me is that the representation visualizes 7 gigabytes of data.

Now the blogosphere and media are abuzz with this, but I need a peer-reviewed article.

So, while the dot map has its own Web page, I am turning to an earlier study that is cited as the “inspiration” for the more recent work.

The study was peer reviewed by the Advisory Board for the US2010 project. The report is entitled “The Persistence of Segregation in the Metropolis: New Findings from the 2010 Census” and can be downloaded online: .

In this report, 2010 census data suggest that desegregation is a slow process, and that growing Hispanic and Asian populations are “as segregated today as thirty years ago.”

Because I live in a “typical” black neighborhood with 40% whites, this item of analysis caught my attention:

“Yet another factor is the difference in the quality of collective resources in neighborhoods with predominantly minority populations. It is especially true for African Americans and Hispanics that their neighborhoods are often served by the worst performing schools, suffer the highest crime rates, and have the least valuable housing stock in the metropolis.”

A spatial analysis combining census data on demographics, income, and community resources can be useful for city administrators when making decisions about how to allocate funds. Perhaps I am being naive in hoping for a political world governed by data-driven decisions, but the technology to support them nonetheless exists.

This kind of decision making is a way to ensure that resources are distributed equitably.

However, the value of the “dot map” is clear in reviewing this paper, as much of the data is presented in tabular form, without any spatial visualization. Spatial visualization can enhance the experience of absorbing the data and intuitively understanding what it means. The example of 8 Mile Road in Detroit, with its clear dividing border between black and white communities, is a very clear representation of the difference between the two data reporting options.