Invisible Nation: Mapping Sioux Treaty Boundaries

This post is based on a talk I gave at the Nelson Institute Center for Culture, History, and Environment Symposium at UW-Madison on February 11, 2017. Here are the slides from the talk:

Last fall I made a map of the Dakota Access Pipeline’s Missouri River crossing that went viral. One part of the map drew especially vigorous discussion in the comments section of the original post: the area labeled “Sioux Territory Under 1851 Treaty of Ft. Laramie.” Because of that discussion, I changed the label from its initial wording, stripping the word “Unceded” from the beginning. I won’t change it back; for now, this keeps the map on sound legal footing. But here I will explain why I think a strong argument can be made for the original wording, and what it portends for Sioux sovereignty over this contested landscape.

While conducting research for the Black Snake map in mid-October, I connected with a staff member at the Standing Rock Tribe’s Historic Preservation Office who gave me information I could use on the map in exchange for some light GIS assistance. As I was packing up to head back to the Oceti Sakowin camp, this person asked me if I would be willing to digitize and map the boundaries of the Great Sioux Nation.

My response was: The what?

The Great Sioux Nation is a geographic entity that came into existence with the 1851 Treaty of Fort Laramie. Intended to halt Indian attacks on white settlers moving west along the Oregon Trail, the treaty for the first time designated territorial boundaries of the Sioux, Cheyenne, Arapaho, Assiniboine, Mandan, and Arikara peoples. To the Native Americans who roamed the plains as nomads hunting buffalo for subsistence, these boundaries were always in flux, determined by who could best their rivals in counting coup. To the Americans, fixated on expanding their nation-state, defined territory and property boundaries were necessary prerequisites for legal ownership and use of the land. These boundaries were thus largely a fiction, but a legal fiction that would bear some fruit in the white man’s court over a century later.

1851-treaty-map-optimized
Map of 1851 treaty boundaries from North Dakota Studies website.

Of course, hostilities between the U.S. and the Sioux in particular did not cease in 1851. In 1868, after a series of skirmishes known as Red Cloud’s War, the U.S. signed another Treaty of Fort Laramie with the Sioux and their allies. This treaty was negotiated shortly after Red Cloud’s warriors massacred 81 American troops under Captain William Fetterman, whom the Indians saw as trespassing in the contested Powder River Country (the troops were protecting white gold prospectors using the Bozeman Trail, whose presence threatened the buffalo herds that were the main form of subsistence for the western Sioux). In the wake of the Civil War, the U.S. Army calculated the cost of militarily subduing the proud Sioux as too high, and sued for peace.

The 1868 treaty contained three sections defining Sioux lands. Article 2 of the treaty designated a “Great Sioux Reservation,” a place where, in theory, Indians who wanted to assimilate could settle into the lifestyle of yeomen farmers. Article 16 recognized Sioux hegemony over the vast Powder River Country, labeling it “unceded Indian territory” and closing American forts and the Bozeman Trail. Article 11 recognized the rights of the Sioux to hunt south of the reservation “so long as the buffalo may range thereon in such numbers as to justify the chase.”

1868-treaty-map-optimized
Map of 1868 treaty boundaries from North Dakota Studies website

These treaty boundary maps were easily located on the internet, but were too small-scale to digitize accurately. I figured it wouldn’t be too hard to use the treaties themselves to draw in more accurate boundaries, or, failing that, find the sources used by the online maps. As I was leaving the Standing Rock office, I said that I should have something done by the weekend. Boy, was I off. I was staring down a rabbit hole.

The first source I went to, of course, was the treaties themselves. Two problems quickly became apparent: in some cases, the treaty language was very vague and didn’t match up with on-the-ground geography, while in others, the boundaries on the online maps obviously disagreed with what the text of the treaties said.

Exhibit A: Here are the Sioux boundaries as defined in the 1851 treaty:

1851-sioux-territory

The problem lies in this part: “…to a point known as Red Bute (sic), or where the road leaves the river; thence along the range of mountains known as the Black Hills…” Red Butte is just west of modern Casper, Wyoming. The Black Hills—at least as we know them today—are, at their closest, about 150 miles to the northeast.

RedButte-BlackHills_map.png
Red Butte and the Black Hills are separated by 150 miles of nondescript plains. Screenshot from ArcGIS Online.

So how does one get from Red Butte to the Black Hills? The boundary on the online maps does not head northeast from Red Butte at all, but rather continues west along the North Platte River, seemingly in violation of this treaty language (otherwise, why would Red Butte be marked as a waypoint on the boundary line at all?).

1851-treaty-map-red-butte

Exhibit B: Article 11 of the 1868 Treaty states that the Sioux “reserve the right to hunt on any lands north of North Platte, and on the Republican Fork of the Smoky Hill River [now just called the Republican River]” as long as there are buffalo to hunt. But the two rivers are widely separated by a seeming no-man’s-land, and what exactly does “on” the Republican River mean, anyway? Again, the maps have an answer, but it’s not a very satisfying one:

1868-Treaty-map-republican-river.png

(Note that the above map mislabels the southern hunting grounds “unceded Indian territory”; the treaty does not use that term for this area, nor did the courts that interpreted it.)

Exhibit C: Article 16 of the 1868 Treaty vaguely defines the “unceded Indian territory” of the Powder River Country as “the country north of the North Platte River and east of the summits of the Big Horn Mountains.” This was all well and good at the time the treaty was signed, when everyone knew that the problem was the Bozeman Trail and the string of forts the U.S. had erected to protect it. As a cartographer, I’m fascinated by the problem of symbolizing vague and poorly defined boundaries; there are plenty of ways to show land claims without drawing a hard line around them. But that is exactly what the online treaty maps do not do. There are very definite lines drawn around this area; some segments obviously follow rivers, while in other spots the underlying geography is not obvious, but the border clearly follows something. Then there’s that big finger-shaped hole in the territory that juts down from the north like a peninsula; what’s that about?

1868-treaty-map-west

So I had to figure out where these boundaries were coming from. That took me to the book Black Hills, White Justice by Edward Lazarus, which appears to be the original source of the online maps. Lazarus is the son of one of the attorneys who prosecuted a pair of Sioux cases against the U.S. to gain compensation for usurped treaty lands, including the sacred Black Hills. The Black Hills case went all the way to the U.S. Supreme Court, which sided with the Sioux in 1980 and awarded them over $100 million for the unilateral “taking” of the Hills by Congress in 1877 (an episode of which a lower court famously declared, “A more ripe and rank case of dishonorable dealings will never, in all probability, be found in our history”). The other case called for remuneration for land rights ostensibly relinquished under the 1868 treaty, and eventually won the Sioux an additional $40 million.

By the time the Sioux won these cases, they had been in court for almost six decades. The intervening years saw a revolution in Native American attitudes toward their own sovereignty. Fearing that monetary compensation would absolve the U.S. government of any responsibility to give back the land they had stolen, the Sioux have refused to touch the money. Today it sits collecting interest in a BIA-administered trust now worth well over $1 billion.

bookscancenter_1
Map of 1851 Sioux territory in Black Hills, White Justice by Edward Lazarus

Lazarus’s book is a splendid bit of storytelling, and vividly recounts the whole convoluted history of U.S.-Sioux legal relations, from the fights that led to the 1851 treaty, to the theft of the Black Hills, to the high court’s ruling on the Hills and the Sioux response. But Lazarus is a lawyer, not a cartographer, and his primary focus is on the shifting jurisprudence around Indian legal rights and treaty claims, not on the geographic boundaries of those claims. Nonetheless, his book at least pointed me in the right direction: toward the Indian Claims Commission, a special court created by Congress in 1946 to adjudicate outstanding Indian treaty claims.

In the act that created the ICC, Congress gave the Commission the power not only to interpret Indian treaties, but to literally redraw disputed treaty boundaries and then make decisions as if they were the original boundaries. And, it turns out, this is just what the ICC did in regards to the 1851 and 1868 Sioux treaties.

In 1965, the Commission took up the issue of the 1851 Sioux territory’s western boundary (the part that went from Red Butte “along the range of mountains known as the Black Hills”). In its decision (15 ICC 577), the Commission thoroughly reviewed the background and tenor of the negotiations that led to the treaty, including the boundary lines drawn on the (highly inaccurate) maps made at the time, and concluded “that the proper location of the Sioux western boundary… follows the drainage divide between the rivers flowing east into the Missouri and those flowing north into the Missouri.” This seemingly logical conclusion produced this territory:

1851_territory_1965_w_labels
1851 Sioux Territory as determined by the Indian Claims Commission in 1965. Map by the author.

But it didn’t make the lawyers for the Sioux happy. The problem, they pointed out, was that the new boundary left a large “neutral zone” between the Sioux and their western neighbors and bitter rivals, the Crow. The eastern Crow boundary specified in the treaty follows the Powder River rather than the “Black Hills” used by the Sioux boundary. The attorneys argued that the land in between rightfully belonged to the Sioux under the terms of the treaty—and should be compensated for. In 1969 (21 ICC 371), the Commission agreed, nullifying its previous boundary and redrawing the 1851 territory thus:

1851_territory_1969_with_labels
1851 Sioux Territory as determined by the Indian Claims Commission in 1969. Map by the author.

To determine the exact line of the western boundary, the ICC relied on a set of maps of Indian land cessions drawn by Charles C. Royce for an 1896 report to the Smithsonian Institution. The 1851 Sioux boundary eventually settled on by the Commission follows the line between area 517—land ceded by the Crow in a separate treaty—and area 597, or what Royce identified as Sioux territory supposedly ceded by the illegitimate 1876 “agreement” cited by Congress to annex the Black Hills.

royce_wyoming1
Indian land cessions map by Charles C. Royce, 1896

Royce produced maps like this for every state (more than one for some states), and luckily they have all been scanned at high resolution and are now hosted online by the Library of Congress. They were the last piece of my puzzle. Based on the Public Land Survey, they are highly accurate for their time. With a little bit of georeferencing work in GIS software, they gave me something to trace where no rivers, graticule lines, or modern political boundaries otherwise demarcated the ICC’s treaty boundary line.

The geography of the 1868 treaty is a bit more complex than that of the 1851 territory, and remains contested. The 1868 treaty is especially important to modern Sioux sovereignty because it is regarded as the last legitimate treaty between the U.S. and the Sioux. The sell-or-starve “agreement” giving up the Black Hills in 1876 was signed by just a handful of war-broken chiefs, violating a key provision of the 1868 treaty stipulating that it could only be superseded by the signatures of 3/4 of the adult male Sioux population. This “rank case of dishonorable dealings” was cited by the Supreme Court as a key factor in its 1980 decision.

To review, the 1868 treaty declared three types of Sioux lands: the Great Sioux Reservation, on which the Sioux could permanently settle (not that they wanted to); hunting grounds available for Sioux use as long as they contained free-ranging buffalo (which were decimated within a few years of the treaty’s signing); and unceded Indian territory, where the unconquered western Sioux could roam and hunt in the lifestyle to which they were accustomed without harassment by U.S. soldiers or fortune-seekers. The treaty language is quite clear on the boundaries of the reservation, which basically covers the western half of what is now South Dakota along with slivers of North Dakota and Nebraska. The other two treaty areas are less clear.

In 1970, the ICC rendered a pair of decisions drawing boundaries around the southern hunting grounds and western unceded territory (24 ICC 98 and 23 ICC 358, respectively). These decisions were somewhat arbitrary, drawing solid lines where perhaps there should not have been any, based on the post-1868 land cessions shown on Royce’s maps. Importantly, and against the arguments of the Sioux counsel, the Commission arbitrarily decided that the eastern boundary of the unceded territory—”the country north of the North Platte river and east of the summits of the Big Horn mountains”—aligned with the western boundary of the reservation on the 104th meridian (104º west longitude). This decision excluded the 1851 Sioux territory to the north of the reservation, which then fell under Article 2 of the 1868 treaty, relinquishing all Sioux claims to territory outside of the lands described by the treaty.

1868_Territory_1970_with_labels.png
Sioux lands under the 1868 Treaty of Fort Laramie as determined by the Indian Claims Commission in 1970. Map by the author.

But this isn’t the end of the story. In a 1978 decision (42 ICC 214), the Commission again modified its earlier findings, declaring that, based on the tenor of treaty negotiations, “the Indians cannot have regarded the 1868 treaty as a treaty of cession.” This would tend to suggest that the 1851 Sioux territory to the north of the reservation should be included in the unceded Indian territory that the Sioux did not believe they were giving up. Thus:

1868_territory_1978_with_labels
Sioux lands under the 1868 Treaty of Fort Laramie as suggested by the Indian Claims Commission in 1978. Map by the author.

The Great Sioux Nation, then, could be constituted by the Great Sioux Reservation and the unceded Indian territory to the west and north of the reservation, as designated by the 1868 treaty and interpreted by the ICC, plus possibly the 1851 territory within the hunting areas to the south of the reservation. Why is this important now? It is through the ambiguously designated northern segment of 1851 Sioux territory that the Dakota Access Pipeline—the Black Snake—passes.

dapl_routes_map_web
Between the Heart River and Lake Oahe, the Dakota Access Pipeline passes through 1851 Sioux territory. Map by the author.

In an article published last month by the Indian Country Media Network, a University of New Mexico Ph.D. student, Nick Estes, and a University of Oregon history professor, Jeffrey Ostler, argue based on the 1978 ICC decision that the pipeline passes through territory never legally ceded by the Sioux. Thus, they say, it is especially imperative for the government to respect the tribe’s right to consult on and even veto dangerous infrastructure projects in the area (I would add, particularly given that the original intent of the “unceded Indian territory” was to check the growth of American infrastructure). I can’t find support in the wording of any ICC decision for their exact claim—that the ICC declared “the northern boundary of the unceded Article 16 lands was the Heart River.” But I agree with the logic of their argument, and I certainly think a court should definitively nullify the arbitrary and ahistorical 104º eastern limit of the unceded territory.

Would a modern court recognize that the Sioux maintain treaty rights in the 1868 unceded territory, including this northern zone? And if so, what would the nature of those rights be? In the 1990s, the Ojibwe tribes of northern Wisconsin won (back) their treaty rights to hunt and fish off reservation in ceded territory. But the idea that Native American treaty rights could include veto power over infrastructure and industrial development has not, to my knowledge, ever been tested in court. Given that the Standing Rock Sioux Tribe is fast running out of other legal avenues to block the Black Snake, perhaps it’s time to test it.

Note: My maps of Sioux treaty boundaries use state boundary and terrain data from the US Geological Survey and satellite imagery data from Digital Globe via ESRI. All maps in this post authored by me are licensed CC-BY.

 


A #NoDAPL Map

When I decided to become a cartographer, I didn’t just want to make pretty and useful maps. I became a cartographer to make maps that change the world for the better. Right now, no situation needs this kind of map more than the current drama unfolding around the Dakota Access Oil Pipeline’s crossing of the Missouri River.

Thousands of Native Americans and their allies have gathered on former Sioux land delimited by the 1851 Treaty of Fort Laramie to try to stand in the way of the “black snake” that could poison the Standing Rock Reservation’s water supply. Many have noted that the pipeline corridor was repositioned from its original route north of Bismarck after the U.S. Army Corps of Engineers rejected it, citing the threat to drinking water in that mostly-white municipality. Yet the Corps failed its federal mandate for meaningful consultation with the Standing Rock Tribe before signing off on a route that moved the pipeline to their doorstep.

This is not to say that the good citizens of Bismarck and Mandan should be subjected to the risk of an oil spill. What’s wrong with the picture above isn’t the routing of the pipeline. What’s wrong is that the pipeline project exists to begin with. Some say it’s a good alternative to dangerous oil-by-rail shipments of Bakken crude. Those are bad too. We don’t need more fossil fuels making it to market to be burned and burn the planet up in turn (I am typing this in Wisconsin as the temperature nears 70 on the first of November). We do all need clean water. As the Sioux say, mni wiconi–water is life.

To keep to its construction schedule, the pipeline company, Energy Transfer Partners, has met nonviolent water protectors with private security guards using attack dogs in a scene reminiscent of 1963 Birmingham. It has worked hand-in-glove with law enforcement and the National Guard to create a militarized response straight out of apartheid South Africa or occupied Ireland. It has locked up hundreds of protesters in wire cages like those used early on at Guantanamo Bay. Those on the ground fear something like another Kent State, yet they keep coming, and the worldwide solidarity has gone viral.

dscn3261
Water protectors approach a line of riot police and armored vehicles on October 15. Photo by the author.

Yet for all that, when I went out to camp with the water protectors at Oceti Sakowin on October 13, I had to rely on a friend’s hand-drawn sketch posted to Facebook for directions to the camp. If you Google “NoDAPL map,” you’ll find few maps available to provide visual context for the unfolding drama. The most popular seems to be the company’s own very-small-scale route map, showing a dotted line over highlighted counties on a generic road map backdrop.

dapl-map-full
Dakota Access Pipeline Route Map by Energy Transfer Partners

This kind of view erases the people affected by the pipeline–quite literally, by covering over their communities with a hot pink gradient fill. It doesn’t tell you that all of Turtle Island (North America) is Indian Country, or that the project runs headlong into international treaties signed between the U.S. and various tribes and then unilaterally violated by Congress. It doesn’t show you where the frontline communities have set up camp to fight back (and here I realize that I should also make a map of the Bold Iowa resistance camp), or where the pipeline company, spurred on by the internal pressure of their $3.8 billion investment, has bulldozed sacred ground, or where exactly a pipeline break would endanger the drinking water of millions downstream.

There was one other, better map of the project that I found and was partially inspired by–a relatively simple yet powerful map by Jordan Engle published by The Decolonial Atlas. It uses the indigenous placenames for key waterways and sites in the vicinity of the Sacred Stones Camp (translations are on the blog post linked to above). It is oriented to the south, challenging the typical viewpoint of Western maps. This map has truly not gotten the attention it deserves.

decolonial_atlas_map
Dakota Access Pipeline Indigenous Protest Map by Jordan Engle and Dakota Wind, The Decolonial Atlas.

Maps like this are great, and there should be more of them. However, I felt strongly that there still needed to be a map of the area that would look familiar to most viewers and orient them to the important geographic facts of the struggle. Some of those facts are certainly in dispute, and I recognize that all maps (even road maps overlaid with pink polygons) take a position and create knowledge based on the cartographer’s point of view. Maps have great power, and it’s a power anyone with pen and paper or a computer can wield.

My Wisconsin-bred geographer hero Zoltan Grossman once declared, “The side with the best maps wins.” The pipeline company has an army backed by state power to do its bidding. The water has its scrappy protectors. It’s time we put the latter on the map.

To download a large-scale printable version of the map, click here.

This post and the map have been corrected to indicate that the 1851 Treaty territory is former Sioux territory, not unceded land as originally stated. In fact, this land was ceded by the Sioux to the U.S. under the terms of Article 2 of the 1868 Treaty of Fort Laramie. While the latter treaty was signed by a plurality of Sioux chiefs, some chiefs did not sign because they refused to agree to any land cessions, and at least one of those who did sign (Red Cloud) later claimed he was misled regarding these treaty terms and believed he was merely signing a treaty of peace. See Edward Lazarus’s excellent book Black Hills, White Justice for more of this history.

The post has also been updated to remove reference to citizen protests against the original route in Bismarck, which I have not found a first-hand record of.

WebGIS is Fun and So Can You

I’ve written this post to accompany the talk I gave on August 31 to the UW Cartography Lab’s Education Series special two-day workshop in partnership with Mapbox. I was asked to talk about JavaScript and Turf.js. Given my mixed audience, I thought talking about Turf right away would be putting the cart a bit before the horse—first, I needed to build a simple web GIS app that could use Turf. So this 1-hour talk turned into an as-noob-friendly-as-possible walkthrough of building such an app.

To start, here are the slides I used; really, just an outline of my talk.

The link on the second slide goes to a dropboxed zip file containing two directories: one called “initial” and another called “final.” The “initial” directory contains a boilerplate HTML file that I used to begin my live app-building demo and a data directory with two zip files. The “final” directory contains the final app script and data files. I’ll be walking through the “final” version, but you can start with “initial” and try to build it out yourself for practice.

Slide 3 is what I called the “Sam Matthews Mantra.” Sam Matthews is this awesome guy who I used to work side-by-side with in the Cart Lab back in 2012 and now works at Mapbox; his visit to Madison was the original impetus for this “reunion” event. Yesterday, he gave a talk on the basic structure of slippy maps, including the four ingredients: tiles, library, data, and internet. But for the purposes of my tutorial, I modified this mantra a bit (slide 4) to outline the parts of my demo: library, tiles, data, and analysis.

To create my app, I first needed to load some helper libraries. The app uses jQuery to interface with the DOM (Document Object Model, the structure of a website) and facilitate asynchronous communication between the browser and server. For the actual mapping, I used Leaflet, now the most popular open-source JavaScript library for creating web maps. The spatial analysis components that make this a web GIS app use Turf.js, but I’ll get into that library more later. I call the libraries from remotely hosted sources using some simple script tags in the <body> section of index.html:

JS-Turf-1
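For reference, those imports look something like this (a sketch; the exact versions and CDN URLs I used in the demo may have differed):

<!-- helper libraries: jQuery, Leaflet, and Turf (URLs and versions illustrative) -->
<script src="https://code.jquery.com/jquery-3.1.0.min.js"></script>
<script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
<script src="https://unpkg.com/@turf/turf/turf.min.js"></script>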

Leaflet also requires its own stylesheet, linked in the header:

JS-Turf-2
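Which is to say, something like this in the <head>:

<!-- Leaflet's stylesheet must be linked or the map won't render correctly -->
<link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css">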

With Leaflet imported, we can build a basic map with the following JavaScript code within the $(document).ready callback function (this bit of jQuery causes the browser to wait until the entire page is loaded before executing the JavaScript):

JS-Turf-3
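Reconstructed, the gist of it is below (the div id “map” and the view settings are stand-ins for whatever is in the demo files):

$(document).ready(function(){
    //wait until the page loads, then create the map in the div with id "map"
    var map = L.map('map', {
        center: [20, 0], //starting view: [latitude, longitude] and zoom level
        zoom: 2
    });
});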

Awesome—I now have a Leaflet map; you can tell by the zoom button in the upper-left corner. Note that I am running the app through a localhost server, rather than double-clicking on the index.html file. Some browsers (Chrome especially) strongly dislike loading data outside of a server, so it’s always best to set one up first. If you don’t want to go through the rigmarole to set up something permanent on your machine, a great temporary solution is to run your app through a preprocessor app such as Prepros or CodeKit.

JS-Turf-4

(The text and “Click me!” button were included in the “initial” index.html template).

It’s not really much of a slippy map, though, without some map tiles. Nowadays, I like to choose tilesets for my Leaflet apps that are included in the Leaflet Providers Preview. Leaflet-providers is a small plug-in for Leaflet that lets you use a shorthand tileset name instead of a full tileset URL to create a Leaflet tile layer, but the Preview site is itself a handy tool that gives you the full code for creating the tile layer if you don’t want to download the plug-in. For my purposes in this demo, I just copied and pasted the code for the Stamen Watercolor tileset:

JS-Turf-5
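The pasted code looked more or less like this (attribution abbreviated here; copy the real thing from the Providers Preview):

//Stamen Watercolor base tileset
L.tileLayer('https://stamen-tiles-{s}.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg', {
    attribution: 'Map tiles by Stamen Design, CC BY 3.0. Data by OpenStreetMap, ODbL.',
    subdomains: 'abcd',
    minZoom: 1,
    maxZoom: 16
}).addTo(map);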

A little explanation of the above: L.tileLayer is a method of Leaflet that creates a tile layer. It takes two parameters: a URL with variables (letters inside curly braces) that are replaced by the library automatically depending on which tiles are called from the server, and an options object with any of several options for the tile layer. The addTo method then adds the new tile layer to the Leaflet map object, contained by the map variable.

With tiles loading, it’s time for data! For the demo, I chose to use two datasets from Natural Earth, the go-to website for popular global datasets covering cultural, physical/natural, and raster themes. The two datasets I chose were States and Provinces and Populated Places, both at the 1:50 million (medium) scale (unbeknownst to either of us beforehand, John Czaplewski, who presented after me, chose the exact same datasets for his live PostGIS demo). To prevent any snafus with the Natural Earth site (it was being slow when I tested it the night before), I went ahead and included these two shapefiles in the “initial” demo folder, each in its own zip archive (“states.zip” and “places.zip”).

If you’ve ever worked with GIS software, you probably know what a shapefile is. Really, it’s a collection of several different files with different components of a geospatial dataset (the .shp component contains the geometry data, while other components contain attribute data, metadata, etc.). Shapefiles are a popular standard format for GIS data (though not usually the best one for spatial analysis tasks), but they’re pretty useless for web apps. Instead, the standard geospatial data format for the web has become the GeoJSON format.

A handy tool for converting shapefiles to GeoJSONs is mapshaper, created by UW alumnus and New York Times graphics wiz Matt Bloch. We can import a shapefile to mapshaper by just dragging the .shp part of the file and dropping it on the site, but that will result in only the geometries being converted to GeoJSON without any attributes. We want the attributes, so we need the whole shapefile; luckily, mapshaper lets us import a zip file containing it. Once we’ve uploaded a shapefile, mapshaper should display something like this:

js-turf-6
The populated places shapefile in mapshaper

Mapshaper is a great little program. It allows you to quickly and easily simplify polygon geometries, reducing the file size. In this case, though, all we want it to do is spit the data back out as a GeoJSON. For this, we click on “Export” in the upper-right corner, then choose “GeoJSON” and hit “Export” again. This should cause a .json file to download. To make accessing the data easier, I renamed each file “places.geojson” and “states.geojson,” respectively.

Now, just what is a GeoJSON? It’s a geospatial variant of JSON, which stands for “JavaScript Object Notation.” Essentially, it’s a more picky formatting of a JavaScript “object,” which is really not an object in the true object-oriented programming sense but rather a type of map or dictionary data container. To see what this looks like, neatly formatted, we can import our new GeoJSON data into another handy little web app called geojson.io, an open-source project largely created by Mapbox’s Tom MacWright. Here is our populated places file displayed in it:

js-turf-7

On the right-hand side of the window, you can see the object structure of the file, which consists of nested key-value pairs. A GeoJSON file like ours has a "type" of "FeatureCollection" and a "features" property consisting of an array of features. This will become important when we use Turf.js to operate on the data. Each feature in turn has a "type", which is always "Feature", a "geometry" object containing the feature geometry as one or more geographic coordinate pairs (always in the WGS 84 coordinate system), and a "properties" object consisting of the attributes, if any. Note that this is similar to a shapefile in that it doesn’t encode any relationships between features, or topology in GIS speak (there is another web spatial data format, TopoJSON, which does encode topology, but we won’t get into that in this tutorial).

Now that we have our data in the right format, we need to load it into our code and onto the map. Loading data into a JavaScript program is trickier than it sounds, but it’s worth taking the time to show you how to do it right. Some tutorials out there will tell you to just assign a variable to the code in each JSON file and bring it into the site with HTML <script> tags, but if you’re loading geographic datasets (which tend to be large), this has a tendency to bog down the loading of your page. It’s much better to load the data asynchronously, adding it to the page after it has loaded into the script. But this means that the rest of your script will have executed before your data is loaded. Thus, you need a special function called an AJAX callback to make use of your data only after it has been loaded by the browser.

First, we need to make sure our .geojson files are stored in the “data” folder of our working directory. Then, we can use one of jQuery’s many helpful AJAX methods to load the data into our script. Because we have two datasets we need to load, it’s best to load them in parallel (at the same time) and only call the function that uses the data (the callback) after both files have loaded. To do this with jQuery, we can use the $.when method:

js-turf-8
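A sketch of that structure (the file paths assume the .geojson files sit in the “data” folder):

//request both GeoJSON files in parallel, then fire the addData callback
$.when(
    $.getJSON('data/states.geojson', function(response){
        data.states = response; //save each file's data to the data object
    }),
    $.getJSON('data/places.geojson', function(response){
        data.places = response;
    })
).then(addData);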

Note that the two $.getJSON methods are actually parameters of the $.when method, so there should be a comma between them, and no semicolons in between or after them (I ran into trouble with this both while practicing for the demo and while giving it). Each one of these .getJSON methods calls a data file and then executes a separate callback for that file, which saves the file’s data to a property of an object I created previously (data). Finally, the .then method calls the overall callback function after the data has loaded, which I’ve named addData. Now, I’ve put a few carts before the horse here; let’s back up and take a look at where I define the data object and the addData function, above the AJAX call:

js-turf-9
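Roughly:

//containers for the raw GeoJSON data and the Leaflet layers made from it
var data = {},
    dataLayers = {};

//the overall callback; runs only after both files have loaded
function addData(){
    console.log(data); //both FeatureCollections should be present now
    //...L.geoJson calls will go here
};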

Again, this code is above the $.when method in the script. Here, I’m first defining two objects: data, which (as we have already seen) will hold the GeoJSON data, and dataLayers, which will hold Leaflet’s rendering of that data into layer objects that can go on our map.

Then I define the callback function, addData. By the time this function executes, the $.getJSON callbacks have already saved each file’s GeoJSON data to properties of the data object, so I can go ahead and take a look at the structure of that object in the console and see that my data is indeed present:

js-turf-10
GeoJSON objects neatly formatted in the Firebug console

Now that I have this data, I can use Leaflet’s L.geoJson method to stick each layer on the map. This method takes two parameters: the data I want to turn into a map layer, and an options object that can hold a number of different layer options. For the states layer, I’ve given it some style options to override Leaflet’s defaults. For the places layer, I’m using the pointToLayer option to create a function that iterates over each point feature and turns it into a Leaflet circleMarker, which I have styled to look like a moderately-sized black dot. Each L.geoJson method is chained to the .addTo(map) method to add it to the map, and the resulting layer object is assigned to a property of the dataLayers object I created above the addData function, allowing for later access.
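Here is a sketch of those two calls (my exact style options may have differed):

//the states polygons, with style options overriding Leaflet's defaults
dataLayers.states = L.geoJson(data.states, {
    style: {
        color: '#fff',
        weight: 1,
        fillColor: '#000',
        fillOpacity: 0.3
    }
}).addTo(map);

//the places points, each turned into a moderately-sized black dot
dataLayers.places = L.geoJson(data.places, {
    pointToLayer: function(feature, latlng){
        return L.circleMarker(latlng, {
            radius: 4,
            fillColor: '#000',
            fillOpacity: 1,
            stroke: false
        });
    }
}).addTo(map);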

Here is what my map now looks like:

js-turf-11

 

With data on the map, we are ready for the fourth and final step, which makes this a true WebGIS: analysis. Now, as you can see in my HTML and the image above, I have included a large “Click me!” button in the boilerplate for the app. A good WebGIS should be interactive; you want to let your users perform operations on the data, not just do for them what you think they want. Since this is a simplified demo, I figured I would just include one button instead of several to demonstrate the concept. At the end of the tutorial, each click of the button will do something different and interesting to the data.

Before we get there, to keep our code neat and make sure the analysis only gets performed after the data is loaded, we need a new function called from within addData to put our analysis tasks in. I’ve called this function analyze, and pass it the two objects I created, data and dataLayers. If you’re working from the “initial” index.html file, you will want to move the $('#mybutton').click listener and clickme callback function inside of this analyze function. Inside analyze, we will perform three types of analysis using Turf.js: a point-in-polygon test, creation of a bounding box, and creation of a triangulated irregular network (TIN). To have our button do each of these in turn, we will create a counter and increment it each time the button is clicked, calling a different analysis function for each counter value.

js-turf-12
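In outline, it looks something like this (the three analysis function names are my own placeholders):

//called from the bottom of addData as analyze(data, dataLayers)
function analyze(data, dataLayers){
    var clicks = 0; //counter incremented on each button click

    $('#mybutton').click(function(){
        //call a different analysis function for each counter value
        if (clicks == 0){
            pointsInPolygons(data, dataLayers);
        } else if (clicks == 1){
            boundingBox(data, dataLayers);
        } else if (clicks == 2){
            makeTin(data, dataLayers);
        };
        clicks++;
    });
};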

Before we go further, the thing to know about Turf is that its methods operate very much like toolboxes in ArcGIS: you put one or more layers in and you get a new layer out. The big difference is that in this case, each layer is in GeoJSON format, either an individual feature or an entire FeatureCollection. Turf includes dozens of helpful analysis tools that can all be run client-side in the browser. This gets an A+ for convenience and interactivity, but a C or D for performance. If you’re using big data or have a complex series of tasks to run, fire up Arc or Q or Python and skip the JavaScript.

Now, let’s create our point-in-polygon function. A point-in-polygon test is a classic problem in computational geometry and has all sorts of applications in GIS. Turf’s .within method accomplishes this test. It takes two parameters—a set of points and a set of polygons—and returns a new FeatureCollection containing just the points that are within the polygons. So, say we want to find the populated places within the U.S. lower 48 states. Since our states dataset has states and provinces for other countries as well, we will first have to pick out only U.S. states that aren’t Alaska or Hawaii and add them to the features array of a new FeatureCollection. We can do this with a piece of native JavaScript, a .forEach loop:

js-turf-13
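Something like this (the Natural Earth attribute names here are from memory and may differ in your download):

//a new FeatureCollection to hold only the lower 48 states
var usStates = {
    type: 'FeatureCollection',
    features: []
};

data.states.features.forEach(function(feature){
    var props = feature.properties;
    //keep U.S. states that aren't Alaska or Hawaii
    if (props.admin == 'United States of America' &&
        props.name != 'Alaska' && props.name != 'Hawaii'){
        usStates.features.push(feature);
    };
});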

Now that we have our subset of states—stored in the usStates variable—we can use it to perform our point-in-polygon test, and view the results in the console:

js-turf-14
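The test itself is a one-liner (a sketch, reusing the variable names from above):

//point-in-polygon test: keep only the places inside the lower 48
var usPlaces = turf.within(data.places, usStates);
console.log(usPlaces); //a new FeatureCollection of points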

js-turf-15

Out of our original dataset, there are 94 populated places within the U.S. Lower 48. To put these on the map, we can simply create a new L.geoJson layer (this time with red dots) and add it to the map. We can also replace the places component of our data object with the new dataset, so that our next two analysis operations only operate on the U.S. places.

js-turf-16
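Along these lines:

//map the U.S. places as red dots
dataLayers.usPlaces = L.geoJson(usPlaces, {
    pointToLayer: function(feature, latlng){
        return L.circleMarker(latlng, {
            radius: 4,
            fillColor: '#f00',
            fillOpacity: 1,
            stroke: false
        });
    }
}).addTo(map);

//replace the original places so the next operations use only U.S. places
data.places = usPlaces;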

Now when we click on the “Click me!” button, we should see this result:

js-turf-17

That was actually the hardest Turf analysis I got to in the demo. I wanted to get the tough one out of the way first, I guess. The next two are much simpler. First, the bounding box:

JS-Turf-18.png
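In code, the whole step is only a few lines (a sketch; the styling is my own):

//compute a rectangular polygon encompassing all of the U.S. places
var bbox = turf.envelope(data.places);

//plunk it onto the map as a hollow rectangle
dataLayers.bbox = L.geoJson(bbox, {
    style: {
        color: '#00f',
        weight: 2,
        fill: false
    }
}).addTo(map);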

This uses Turf’s .envelope method to return a polygon encompassing all vertices. Once again, it makes a call to L.geoJson to plunk the bounding box onto the map. Voilà:

js-turf-19

Finally, we’ll create a TIN using our U.S. Places as the input dataset. Turf’s .tin method takes the dataset and optionally the name of an attribute that can be used as a z value for each vertex. This results in polygons that have three properties: a, b, and c, the z values. We can use this data to shade the triangles; in this case, I chose to calculate the averages of the three values and use each polygon’s average to derive its percentage of the highest average z value in the dataset. I then set this percentage as the opacity of the polygon to make the data visible.

js-turf-20
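A sketch of that logic ('pop_max' is my guess at the attribute to use for z values; check your data’s properties):

//build a TIN from the places, using an attribute as each vertex's z value
var tin = turf.tin(data.places, 'pop_max');

//each triangle gets properties a, b, and c: the z values of its vertices
var maxAvg = 0;
tin.features.forEach(function(feature){
    var props = feature.properties;
    props.avg = (props.a + props.b + props.c) / 3;
    if (props.avg > maxAvg){ maxAvg = props.avg; };
});

//shade each triangle by its average's share of the highest average
dataLayers.tin = L.geoJson(tin, {
    style: function(feature){
        return {
            color: '#000',
            weight: 1,
            fillColor: '#000',
            fillOpacity: feature.properties.avg / maxAvg
        };
    }
}).addTo(map);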

Here is the result (after three button clicks):

js-turf-20

That’s about it for this demo. Of course, there are lots of ways to use Turf that don’t involve Leaflet; since it speaks GeoJSON, it’s compatible with a wide variety of other libraries and frameworks. Hopefully this has been a useful intro to open source WebGIS tools and inspires you to go do something cool.

Carbon Emergency Infrastructures

The following post contains the transcript and images from a talk I gave at the UW-Madison Geography Symposium a couple days ago. Since I wrote the whole thing out, I thought I would go ahead and share it here.


 

The Carbon Pollution Emergency Act of 2022 has been heralded by historians as the first bold step against global warming taken in the United States. It implemented a heavy carbon tax on all fossil fuels and progressively restricted the amounts of coal, oil, and natural gas that could be produced or imported each year. The Act made it official federal policy to reach 100% renewable electricity generation by 2050. Proceeds of the carbon tax and a reduction in military spending were used to provide 90% rebates on small-scale solar and wind energy systems for homes and to fund the replacement of fossil fuel electricity plants with wind and solar farms. In the transportation sector, the Act placed a moratorium on the building of new roads and airports and boosted funding for mass transit systems by over 1,000%…

 Such a scenario might seem far-fetched today. But not much is invented without imagining it first. Think of all the tablets and cell phones we have now, even 3D printers—all technologies that were dreamed up on Star Trek in the sixties. If we can dream it, we can do it. And I’ve been dreaming. And since I’m a cartographer, my dreams look like maps. The focal point of my dreams up to this point has mostly been the transportation sector, since it’s big, it’s visible, and it entails nifty-looking machines that go “vroom!”

caltrain
Vroom!

I was inspired to share some of my dreams with you by some conversations during the CHE Symposium about the concept of infrastructure, and the question of whether nature can be conceived of as infrastructure. While I don’t see nature as a form of infrastructure, infrastructure does have a large role in shaping nature. This is particularly true of transportation infrastructure, as it has reshaped much of this country’s landscape and is our second-largest source of greenhouse gas emissions, after electricity. So come along and dream with me for the next few minutes about what a transportation future with a lighter carbon footprint could look like.

I want to start here:

Cuba_Hwy
Take a long walk off a short bridge?

This is the main east-west highway in Cuba. When I visited Cuba in 2005, we traveled part of this highway in a beat-up old school bus. We passed a number of unfinished bridges like this one, along with interchanges with dirt exit ramps leading to nowhere. After the Soviet Union collapsed, Cuba underwent a Carbon Emergency, what they call the Special Period. Overnight, they no longer had an overpaying buyer of their sugar and tobacco exports, so they no longer had money to import fossil fuels. Driving suddenly became very expensive. What you unfortunately can’t see in this picture, and I couldn’t find a good picture of, is all of the bicyclists, pack animals, and hitchhikers that I witnessed making use of the four-lane expressway. While Cuba is slowly being reintegrated into the fossil fuel economy, it still serves as a model for what a post-carbon or largely post-carbon society could look like. And it’s not that bad. The point I want to make here is that this highway is not abandoned, just repurposed. It changed my way of thinking about post-carbon infrastructures. New stuff takes more energy to build, and at least in the near term, that necessarily means it needs more fossil fuels and other non-renewable resources. We shouldn’t necessarily be thinking of how to build shiny new things, but rather how to repurpose the immense and under-cared-for infrastructures we already have to utilize them without fossil fuels.

MadisonRapidTransit_maponly

The first cartographic imaginary I created along these lines involved a commuter rail system for Madison. I designed this map over the summer of 2014. Railroads were the first nationwide transportation infrastructure designed to move people and goods quickly over land, so this would be a renewal of an old idea rather than a new idea. At least 60 percent of this proposed urban rail transit network would use existing railroad rights of way.

MadisonRapidTransit2

Conducting a bit of GIS analysis, I found that this system would put about 21% of the Dane County population and 47% of the county’s jobs within a half-mile of a station. The system-wide use would be higher still when considering the network of park-and-rides and bus transfer points that would allow it to interface with other forms of surface transportation.

This brings up an important point: the goal of a carbon emergency infrastructure cannot simply be to replace fossil fuel infrastructures, as this is not realistic in the near term. It must interface with them and make it convenient for humans to shift their habits away from heavily consumptive transportation. Imagine how much quicker and easier it would be to take a train from home in Schenk-Atwood or South Park Street to campus than drive a car or cram onto a bus. Young people, old people, and anyone else without a driver’s license would have more freedom to move and more jobs accessible to them. Mass transit is a racial justice issue as well, as those in the black community are less likely to have driver’s licenses than average due to poverty and institutional discrimination. Economists who think about mass transit say that it creates “positive externalities,” or virtuous feedback loops that benefit society at large. One of these is the “Mohring Effect,” the observation that better mass transit service creates more demand, which in turn increases the frequency of service, reducing travel times, creating more demand, and so on.

After completing the urban commuter map, I began to wonder how I might extend a vision for better mass transit outward to rural Wisconsin. Where I went to college, up north, we were fortunate enough to have a regional bus system with one route that ran every two hours on weekdays. This is to say, it was a valiant effort with the very limited funding available, but not very useful to your average commuter. Most parts of the state don’t even have that. I first thought of rebuilding the railroads, as per my Madison transit idea. But the extensive railroad network that used to exist in Wisconsin has largely gone to pot and would cost billions to rebuild. Why not pick up some lower-hanging fruit?

WisconsinBusRoutes
Potential bus routes in northern Wisconsin

During trips out to the Olympic Peninsula of Washington State, I used their regional bus system, which has more frequent service and quite effectively connects isolated towns across multiple counties. I thought, Wisconsin can do that, and better. So for the past year, in my very limited spare time, I have been working on concocting a transit map with the premise of bus routes with hourly service on every federal and state highway in Wisconsin. This infrastructure uses the road base that already exists, but reimagines it as a more efficient and less deadly people-carrying network. Rides would be pooled onto fast, clean electric buses with professional, sober drivers. Regular bus service, especially in the evenings, would reduce the epidemic of drunk driving in rural Wisconsin, where every burgh has a bar or two and driving is currently the only way to get home. The roads themselves need less maintenance, as fewer cars and trucks create less wear and tear on the pavement. The bus system brings freedom of movement and opportunities for breathing new economic life into the impoverished countryside.

 

proterra-bus
Get on the bus!

The primary investment needed would be in the moving parts of the infrastructure, the buses. These could be made all-electric using newer battery and fuel cell technologies, or run on cleaner forms of biodiesel, or some combination. They could largely be made out of recycled metals and plastics from decommissioned war machines and old pop bottles and grocery bags. Some nonrenewable resources would still be required.

beach-1170

DevilsLake
Let’s go play in the park!

Mass transit need not only be used for commuting to and from employment. Recreation has a place in our imaginaries too. And like access to jobs and other privileges that come with mobility, access to recreational and relaxation opportunities can and should be enhanced by mass transit. With a relatively small grant, the City of Madison could begin offering round-trip service to nearby state parks on summer weekends, making these oases accessible to young people and low income folks who can’t afford the gas and the park entry fee.

npbus
Passengers board a shuttle bus in Zion National Park. National parks provide shuttle buses to alleviate automobile congestion and reduce air pollution.

 

In this imaginary, city buses not in use during the more limited weekend service routes would be driven by Metro drivers who want to earn overtime pay while enjoying some time away from the city themselves. The buses begin from downtown, stopping at outlying transfer points for greater convenience, and spend the day traveling to and around the recreation area before returning home in the evening. From Madison, Parkbus routes could service a number of parks and recreation areas within an hour’s drive on different weekends throughout the summer, including Devil’s Lake, Blue Mounds, Governor Dodge, Kettle Moraine State Forest, and Wisconsin Dells. Service to the same area on both Saturday and Sunday facilitates overnight campouts. Urban citizens have the opportunity to relax and rejuvenate in nature without having to drive there. The most crowded parks, like Devil’s Lake and Governor Dodge, no longer have to contend with paving over more of their land as parking lots, and their air is cleaner too, all because urbanites have the option of taking the bus instead of driving.

IBX

To close, I want to take us back home to Madison and consider the one form of wheeled transportation that is closest to being carbon-neutral. That, of course, is biking. Madison is already a great place to commute by bicycle. It is currently ranked the 7th-most bike-friendly city in the U.S. by Bicycling Magazine. On the other hand, we’re only 7th, just behind hill-infested San Francisco! Madison can do better!

Tourism_MononaTerrace
Cyclists ride past Monona Terrace on the Capitol City Bike Path. Image from Shifting Gears, Wisconsin Historical Museum

One way to encourage more bike commuting is to improve existing commuter routes while reducing the convenience of driving. The most heavily used bike corridor in Madison, the Capitol City Path across the Isthmus, sees close to a thousand cyclists a day on average during peak season. But it requires frequent stops for cross-traffic on very minor streets, as well as tedious and dangerous crossings of two of the city’s busiest intersections. To get to campus that way, you have to go out of the way along Monona Bay before turning north. On my own commute to campus from the East Side, rather than deal with this detour, I ride Gorham and Johnson streets, which have very heavy car traffic and are downright treacherous in winter. But what if the existing bike corridor were re-envisioned as a “no-stop zone” for bikes, the nation’s first bicycle expressway? Think about the new raised pedestrian crossing on Park Street at the end of Library Mall. Why can’t such crossings be added to the existing bike path to give cyclists a smoother ride? On local streets, cars should be made to stop for bikes instead of the other way around. A cut-through bike path could be constructed alongside the train tracks that cut the corner from Broom Street to West Main.

IBX_south_bridgeIBX_north_bridge

IBX_wilson_st

Two new bicycle overpasses would carry cyclists quickly and safely across John Nolan Drive at North Shore and across Williamson Street at the Blair/John Nolan intersection. The current “bike boulevard” along East Wilson Street would also be reconfigured to put bikes in a partitioned express track.

 

First-Settlement-Park-Overview
Plans by the Madison Design Professionals Workgroup for a covered-over Blair Street/John Nolan Drive

It turns out I am not the only person thinking about such things. The Madison Design Professionals Workgroup recently put forward a proposal to cover up John Nolan Drive to improve local pedestrian, bike, and rail connections between downtown and the Isthmus. Their plan is driven more by aesthetics than sustainability and would use a lot more nonrenewable resources, but does incorporate bicycle and commuter rail components. It could easily include my vision for a bike expressway (and if the designers are smart, I think they will). Of course, this plan is much more thought out than mine, and by people who actually get paid to do this stuff. To date, all of my doodling has been stuff I’ve daydreamed in my spare time. But who knows? Even daydreams can sometimes make their way into the world.

Printing in Leaflet

It’s been a while since I posted anything to this blog, but that doesn’t mean I haven’t been busy. I’ve been having all kinds of adventures working with Leaflet and making it do interesting things it probably wasn’t intended for. I’ll try to catch up with writing about some of these over the next few posts.

My most recent triumph involves printing a Leaflet map. Now, I know what you’re going to say: Why would you print a Leaflet map? Aside from the snarky answer why not?, the project I’m working on requires a map that is both interactive and can go where there are no mobile devices or internet access, and that requires printing.

Before I get into the technical stuff, I want to briefly expound on the broader implications of what I’m about to cover. We in the cartography world are generally split down the middle when it comes to media: either you make static maps for print or you make interactive web maps. Generally, the only crossovers are static maps that get plopped online Web 1.0-style, as images or PDFs. I think it’s high time we start thinking about transcending these media silos with our maps. Like, can you print an SVG graphic generated by D3? Sure you can, but can you control the scale at which it prints and make it look good? Similarly, how do we make zoomable, pannable slippy maps, with all the advantages those entail for web users, and make them printable as a resource for those who need to draw on top of them or pick them up and take them where reliable internet access doesn’t exist?

The specific map I’ve been working on for a little over a year now is a wikimap of data collected in eastern Senegal, which will enable trusted users with local knowledge to edit the data and contribute new data. One of the requirements of the application is that it be printable as posters to take to village meetings. The map utilizes an underlying satellite imagery tileset, a custom tileset with the polygon and line data (hosted with Tilestrata on Amazon AWS, which is a whole other blog post waiting to happen), and the point data as overlays added with your typical L.geoJson calls.

Screenshot1.png
Couloirs Transhumance, a map of herding routes in eastern Senegal

One challenge is that the map covers such a huge geographic area and includes so much data that a printed version of the entire thing would be unintelligible unless printed on a very large poster. Thus, users need to be able to choose both the scale of the map they print and the paper size, and they need to be able to preview what they’re going to print. So I built a print preview window.

Screenshot2_crop
Teh Printerface!

First of all, notice there are no satellite image tiles on the map. Satellite images are basically photos (but from spaaaaaaaace). Have you ever tried printing a photo at 72 dpi? It looks. like. crap. Likewise, raster tilesets look like crap when printed. Ditch ’em.

 

But I still wanted to take advantage of Leaflet’s smooth interaction capabilities to allow the user to control the map view that they’re going to print. Thus, I created a new Leaflet map with no base layer and L.geoJson overlays for all of the mapped data, including the two-dimensional features that are burned into my custom tileset on the main map. When there are thousands of SVG paths and raster icons on the map, it slows things down a bit. So I had to kill scroll wheel zoom and get the zoom buttons off the map anyway, since they’re not going to be present on the final printout. Hence the scale bar, which represents the Leaflet zoom levels as tics on a line to give users an idea of how close they are to the minimum or maximum zoom.

You’ll notice that under the scale bar is an actual, honest-to-goddess ratio scale, which applies to the printed map. Okay, so there’s like, A LOT of math behind this, because the map scale varies based on latitude, zoom level of the map, page size, and the size and shape of the preview window. Here’s the code:

function adjustScale(){
    //change symbol sizes and ratio scale according to paper size
    var prevWidth = $("#printPreview").width();
    var prevHeight = $("#printPreview").height();
    var longside = getLongside();

    //find the paper mm represented by each preview pixel, using the paper's long side
    var mmppPaper = prevWidth > prevHeight ? longside / prevWidth : longside / prevHeight;

    //find the ground mm represented by each pixel, read off Leaflet's scale bar
    var scaleText = $("#printBox .leaflet-control-scale-line").html().split(" ");
    var multiplier = scaleText[1] == "km" ? 1000000 : 1000; //mm per km or mm per m
    var scalemm = Number(scaleText[0]) * multiplier;
    var scalepx = Number($("#printBox .leaflet-control-scale-line").width());
    var mmppMap = scalemm / scalepx;

    //the ratio scale denominator: ground mm per paper mm
    var denominator = Math.round(mmppMap / mmppPaper);
    $("#ratioScale span").text(denominator);
    $("#previewLoading").hide();
    return [mmppMap, mmppPaper];
};

function getLongside(){
    //get the paper's long side in mm, minus print margins
    var size = $("#paperSize select option:selected").val(); //e.g. "A4"
    var series = size[0]; //ISO 216 series letter
    var pScale = Number(size[1]); //size number within the series
    var longside;
    if (series == "A"){ //long side lengths in mm, minus 10mm print margins on each side
        longside = Math.floor(1000/(Math.pow(2,(2*pScale-1)/4)) + 0.2) - 20;
    } else if (series == "B"){
        longside = Math.floor(1000/(Math.pow(2,(pScale-1)/2)) + 0.2) - 20;
    };
    return longside;
};

The printerface gives the user access to all of these variables and changes the map scale accordingly. Fortunately, international paper sizes greatly simplify the math by maintaining the same aspect ratio (√2) regardless of size. If the window size changes, the preview map can change size proportionally and still represent the printed page. To better represent what the printout will look like, the ratios also allow the symbols on the map to be automatically resized based on the window size or chosen paper size. So if the user changes the paper size to, say, A1 (a typical poster size), the preview map looks like this:

Screenshot3_crop.png

Note that the ratio scale has increased quite a bit. Think about what this will look like when it turns into an 841 mm × 594 mm poster. The bounding box has been preserved, symbols will keep the same proportions relative to each other as in the preview, and they will be the same absolute size as the symbols printed on any other page size (8 mm wide for the icons). Also note the new labels for village features. These are scripted to show up whenever the scale is larger than 1:250,000. More on these in a minute.
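
In case you’re wondering, the symbol resizing boils down to something like this sketch (the selector is an assumption; the 8 mm target is the real one):

var mmpp = adjustScale(); //[ground mm per px, paper mm per px]
var iconPx = 8 / mmpp[1]; //screen pixels that will print 8 mm wide

//resize the raster icons so they print at 8 mm on any paper size
$("#printPreview img.leaflet-marker-icon").css({
    width: iconPx + "px",
    height: iconPx + "px"
});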

The last tricky step is how to actually resize everything so that the whole map prints at the right size with the correct bounding box. Folks, I’m here to tell you, figuring this out was no walk in the park. I may have prematurely lost some hair over it. In the end, the solution was as simple yet un-straightforward as the cheat that lets you beat Myst within the first five minutes (for you whipper-snappers, that’s a shameless ’90s computer game reference). Here’s the code in case you want to pick through it; if you just want the punch line, skip on down.

$("#printButton").click(function(){    

    //transform map pane
    var mapTransform = $("#printPreview .leaflet-map-pane").css("transform"); //get the current transform matrix
    var mmpp = adjustScale(); //get mm per css-pixel
    var multiplier = mmpp[1] * 3.7795; //multiply paper mm per css-pixel by css-pixels per mm to get zoom ratio
    var mapTransform2 = mapTransform + " scale("+ multiplier +")"; //add the scale transform
    $("#printPreview .leaflet-map-pane").css("transform", mapTransform2); //set new transformation

    //set new transform origin to capture panning
    var tfMatrix = mapTransform.split("(")[1].split(")")[0].split(", ");
    var toX = -parseFloat(tfMatrix[4]), //negate the translation values so the scaled
        toY = -parseFloat(tfMatrix[5]); //pane stays anchored at the viewport's top left
    $("#printPreview .leaflet-map-pane").css("transform-origin", toX + "px " + toY + "px");

    //determine which is long side of paper
    var sdim, ldim;
    if ($("#paperOrientation option[value=portrait]").prop("selected")){
        sdim = "width";
        ldim = "height";
    } else {
        sdim = "height";
        ldim = "width";
    };

    //store prior dimensions for reset
    var previewWidth = $("#printPreview").css("width"),
        previewHeight = $("#printPreview").css("height");

    //set the page dimensions for print
    var paperLongside = getLongside(); //paper length in mm minus 20mm total print margins minus border
    $("#printPreview").css(ldim, paperLongside + "mm");
    $("#printPreview").css(sdim, paperLongside/Math.sqrt(2) + "mm");
    $("#container").css("height", $("#printPreview").css("height"));
    
    //adjust the scale bar
    var scaleWidth = parseFloat($("#printBox .leaflet-control-scale-line").css('width').split('px')[0]);
    $("#printBox .leaflet-control-scale-line").css('width', String(scaleWidth * multiplier * 1.1) + "px");
    $("#printBox .leaflet-control-scale").css({
        'margin-bottom': String(5 * multiplier * 1.1) + "px",
        'margin-left': String(5 * multiplier * 1.1) + "px"
    });

    //adjust north arrow
    var arrowWidth = parseFloat($(".northArrow img").css('width').split("px")[0]),
        arrowMargin = parseFloat($(".northArrow").css('margin-top').split("px")[0]);
    $(".northArrow img").css({
        width: String(arrowWidth * multiplier * 1.1) + "px",
        height: String(arrowWidth * multiplier * 1.1) + "px"
    });
    $(".northArrow").css({
        "margin-right": String(arrowMargin * multiplier * 1.1),
        "margin-top": String(arrowMargin * multiplier * 1.1)
    });

    //print
    window.print();

    //reset print preview
    $("#printPreview .leaflet-map-pane").css("transform", mapTransform); //reset to original matrix transform
    $("#printPreview").css({
        width: previewWidth,
        height: previewHeight
    });
    //reset scale bar
    $("#printBox .leaflet-control-scale-line").css('width', scaleWidth+"px");
    $("#printBox .leaflet-control-scale").css({
        'margin-bottom': "",
        'margin-left': ""
    });
    //reset north arrow
    $(".northArrow img").css({
        width: arrowWidth + "px",
        height: arrowWidth + "px"
    });
    $(".northArrow").css({
        "margin-right": arrowMargin,
        "margin-top": arrowMargin
    });
});

Okay, here’s the punchline. The key is using a CSS transform to temporarily scale up the whole leaflet-map-pane div, which holds all of the layers in the print preview map. Leaflet already adjusts the symbol positions using a transform translation, and I need to preserve that transform to reset the map after it’s printed. But to print it, I need to add a scale transform that multiplies the size of everything by the ratio of paper millimeters per screen millimeter (which you get if you cross-multiply paper mm per pixel by pixels per screen mm; e.g., if each preview pixel represents 0.5 mm of paper, the pane gets scaled by 0.5 × 3.7795 ≈ 1.9). Once I figured this out, I had to figure out how to adjust the transform origin so the bounding box didn’t move out from under my paper map. This involved dissecting the transform matrix and negating its last two numbers to use as the x and y coordinates of the transform origin, which moves the whole map back up and to the left, where it should be (I still can’t keep straight why this works even after re-reading the above link, but I’m sure you mathy people can figure it out).

The rest of the above code just messes with the map div and the accessory elements to get them all the right print size, pulls the trigger, then sets everything back right for the screen viewer. Oh, I also have some helpful print CSS styles, which I’m not going to bother explaining:

@media print {
    @page {
        size: auto;
        margin: 10mm;
    }

    body {
        /*border: 5px solid blue;*/
    }

    #container {
        /*border: 4px solid green;*/
        position: absolute;
    }

    #cover, #maparea, #printOptions, #ppmmtest, .closeDialog, .resize, .msg_qq {
        display: none !important;
    }

    #printBox, #printPreview {
        position: absolute;
        bottom: 0;
        left: 0;
        top: 0;
        right: 0;
        /*border: 1px solid red;*/
    }

    #printPreview {
        border: 1px solid black !important;
        background-color: white !important;
    }

    #printPreview span {
        text-shadow: -1px -1px 0 #FFF, 1px -1px 0 #FFF, -1px 1px 0 #FFF, 1px 1px 0 #FFF;
    }

    .leaflet-control-scale-line {
        text-align: center;
    }
}

So yeah, that’s printing from Leaflet in a nutshell. Just to prove I’m not blowing smoke, here’s a scan of the printed version of the second screenshot of the post.

scan.png

Pretty good, huh?

Now, I promised you I would talk about labels. Leaflet and labels are like Cowboy and Octopus. After all, why would you need to put labels on a Leaflet map when they’re baked into your tiles? Well, again we come to this minor issue of printed raster tiles looking like something a dung beetle would enjoy. So I needed to figure out how to plunk labels onto my map. For the area features, which are drawn as SVG overlays, I decided the easiest thing would be to just add the labels as SVG <text> elements, placing them in the center of each feature’s bounding box. Unfortunately, I’ve found that once Leaflet draws its overlays, you can’t just put new elements into the SVG and expect them to render. So I cheated a little and brought in D3 to do the job. Because D3 is magic.

//for SVG polygons, add label to center of polygon
var g = d3.selectAll('#printPreview g'),
    scaleVals = adjustScale(), //[ground mm per px, paper mm per px]
    denominator = Math.round(scaleVals[0]/scaleVals[1]),
    //hide area labels at scales smaller than 1:500,000; otherwise size them to print at 8mm
    areaTextSize = denominator > 500000 ? '0' : String(8/scaleVals[1]),
    labelDivBounds = [];

g.each(function(){
    var gEl = d3.select(this),
        path = gEl.select('path');
    //the label text is stored in the path's class, before a '|' separator
    if (path.attr('class').indexOf('|') > -1){
        var labeltext = path.attr('class').split('|')[0],
            bbox = path.node().getBBox(),
            x = bbox.x + bbox.width/2, //center of the feature's bounding box
            y = bbox.y + bbox.height/2,
            color = path.attr('stroke'),
            //off-map features report a (0,0) center, so hide their labels
            textSize = x == 0 && y == 0 ? 0 : areaTextSize,
            text = gEl.append('text')
            .attr({
                x: x,
                y: y,
                'font-size': textSize,
                'text-anchor': 'middle',
                fill: color
            })
            .text(labeltext);
    };
});

One thing I want to point out here is that textSize variable. I was having a bit of trouble with a bunch of labels piling on top of each other at one particular spot on the map, because apparently once a feature is off the map, its bounding box coordinates become the negative of half the corresponding dimension of the SVG. So I just shut those labels off. There’s also a line that sets the areaTextSize to 0 if the scale is less than 1:500,000 to avoid cluttering up the map too much. There’s a similar D3 .each() loop to adjust the labels each time the map is moved or resized that I’m not showing here.

I also wanted to add labels to the village symbols on the map. But these are actually raster icons, not part of the Leaflet overlays SVG. First I played with just a straight JS loop that would use jQuery to grab the icons and plunk absolutely-positioned divs or spans on the map for each icon. This choked the browser. So then my thought was to make Leaflet do the work, creating an L.geoJson layer for all of the labels I wanted, and cooking the labels themselves with a pointToLayer function. The problem here is that the only SVG layers Leaflet creates are paths, and the only non-SVG layers Leaflet creates are icons! No text or any other elements.

So I decided to do something clever. I would trick Leaflet into putting the labels on the map by creating icons with the feature names in an alt attribute and feeding them a bad src URL! Aside from a pesky image 404 error in the Console, this worked great in Firefox. But Chrome annoyingly adds a broken-image icon and cuts off the alt text; IE is a little better but still adds an X icon. So finally I decided I just had to add a label class to Leaflet. Fortunately, this wasn’t too hard. I just extended the Icon class with the bare minimum modification to create <span> elements instead of <img> elements:

//extend the Icon class to create <span> elements instead of <img> elements;
//the label text is passed in place of the icon URL
L.Label = L.Icon.extend({
    _createImg: function (text, el) {
        el = el || document.createElement('span');
        el.innerHTML = text;
        return el;
    }
});

L.label = function (options) {
    return new L.Label(options);
};

Then I just had to instantiate an L.geoJson layer and feed it my label class. Voilà! I magically had village labels on the map! (At scales larger than 1:250,000, again to avoid clutter.)
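
That instantiation amounts to something like this sketch (labelData and the property name are stand-ins for the real data):

var labelLayer = L.geoJson(labelData, {
    pointToLayer: function (feature, latlng) {
        return L.marker(latlng, {
            //the label text rides in as the "iconUrl" and becomes the
            //<span>'s innerHTML via the overridden _createImg above
            icon: L.label({ iconUrl: feature.properties.name })
        });
    }
}).addTo(printPreviewMap);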

Screenshot4_crop.png

I did end up adjusting the CSS just a little with some in-line script and a stylesheet style:

//inline script to adjust label css; pointTextSize is computed elsewhere,
//much like areaTextSize above
$("#printPreview span").css({
    'margin-left': pointTextSize,
    'font-size': pointTextSize,
    'line-height': pointTextSize
});

/* css style to create label outlines for improved readability */
#printPreview span {
    text-shadow: -1px -1px 0 #FFF, 1px -1px 0 #FFF, -1px 1px 0 #FFF, 1px 1px 0 #FFF;
}

It’s not perfect, but the end result seems to be a readable printed Leaflet map. I hope this has given those of you who want to step outside of our media silos some ideas for further experimentation!

Open Web Mapping: How do we teach this stuff?

Today I gave a talk at the NACIS conference about redesigning the lab curriculum for Geography 575, UW-Madison’s Interactive Cartography and Geovisualization course, to more effectively teach the new Open Web Platform mapping tools that are now the industry standard for web mapping. People seemed to like it.

But the 20-minute (or, in my case, 23-minute…oops) conference talk/slideshow format forces a tradeoff between conveying lots of necessary information and keeping slides lean enough to support the talk without textsploding the visual cortex. After paring down to the bare bones of my story, I was left with a number of unavoidably text-heavy slides showing our 2014 curriculum outline and comparing it to the revised sequence for the next iterations of the course. All I could do was point to them and say, “don’t read this now, here are the key bits to notice…” People who didn’t see the talk can go back and read the content, but they won’t get the key takeaways I presented verbally (Why is this web mapping workflow thing so important that it gets 7 slides? Why are those curriculum topics highlighted in purple?).

So, to fill in the missing pieces of the online slideshow without too much extra labor on my part, here is the outline of my talk. I am in the process of submitting a more detailed journal paper and will post the citation here if it ever gets published.

Open Web Mapping: How do we teach this stuff? NACIS 2015 talk

  1. Title slide
    1. Intro self
    2. Talk based on my experience as a curriculum planner and TA for G575
    3. 4-credit course on cartographic UI/UX design taught by Rob Roth
    4. 2 hr per week lab component
    5. Will talk about
      1. Desired learning outcomes for course
      2. Curriculum redesign to match new teaching technology stack
      3. Evaluation process
      4. What comes next
  2. Desired Learning Outcomes
    1. First two outcomes corresponded to what were initially three lab assignments: animation, sequencing and retrieve interactions, and geovisualization
      1. Eventually merged first two lab assignments into one Leaflet lab without animation requirement
    2. Third outcome informed by final project—collaborative start-to-finish work experience with real world scenario chosen by students.
    3. Fourth and fifth outcomes “soft” outcomes that come with learning tools
      1. Translate across technologies
      2. 4: necessary cognitive development to apply technologies
      3. 5: best practices for design & development informed by research
  3. Technology stack I
    1. Taught with Flash until 2012, then moved to open web mapping technologies as Flash was dumped as industry standard
    2. Flash: encapsulated IDE with seamless integration of design software and well-featured native scripting language
  4. Technology stack II
    1. Open web platform technology stack is bigger and more unwieldy
    2. Job no longer to teach students 1 or 2 pieces of software to make great maps; is to teach integration of dozens of platforms, languages, libraries, frameworks to make great maps
    3. Major unanticipated consequences for teaching techniques
    4. Didn’t put much thought into restructuring curriculum initially; kept everything but lab technologies the same
    5. Results in 2013 were very mixed
      1. Award-winning maps
      2. Many students struggled, especially with learning D3
  5. Web Mapping Workflow I
    1. Rich’s dissertation: idea of Web Mapping Workflow to describe process of creating a web map on Open Web Platform
    2. Ideally, Workflow should inform scope and sequence of our teaching
      1. Scope: what is taught
      2. Sequence: in what order we teach it
    3. My concept of workflow is adapted from, not identical to, Rich’s
    4. First step in workflow is to design web map.
      1. Mostly teach this in G572, Graphic Design for Cartography; but included as part of final project
  6. Web Mapping Workflow II
    1. Second step is to set up a development environment—akin to a workshop with hammer, saw, drill, etc.
    2. Initially treated this topic lightly, but adding more structure to it
  7. Web Mapping Workflow III
    1. Third step is to find and format data
    2. Always takes longer than expected—often most time-consuming stage
    3. Proved to be tricky for students; required more attention
  8. Web Mapping Workflow IV
    1. Fourth step is creating basic Markup that forms backbone of the web page
    2. Can be seen as the space for cartographic representation—where the actual elements of the web map exist
  9. Web Mapping Workflow V
    1. Fifth step is script
    2. Fourth and fifth steps highly integrated; dynamic markup created by the script
    3. Script is necessary for adding cartographic interaction
  10. Web Mapping Workflow VI
    1. Sixth step is fine-tuning
    2. Could include debugging, but more usability evaluation and feedback
    3. Haven’t had time to teach usability evaluation; do now emphasize debugging more
  11. Web Mapping Workflow VII
    1. Final step is Deployment
    2. This has always been included in course, but relatively minor
    3. Used to be in-house, now must be off-site
  12. 2014 Curriculum Sequence
    1. Based on 2013 experience, designed a new curriculum sequence
    2. Heavy direct instruction of key concepts first couple weeks
    3. Workflow stages shown by letters
    4. Workflow stages not taught sequentially, but progressively more integrated over time
  13. How well did it work?
    1. Two reasons for wanting to assess the course:
      1. Did we meet the desired learning outcomes?
      2. Where were the sticking points? Threshold concepts (Bampton)
    2. Used four tools:
      1. Entrance survey to find out where students were at
      2. Instructor logs for qualitative observations
      3. Student extra credit assignments narrating stumbling blocks and aha moments
      4. Exit survey—gave most of the quantitative data on student learning
  14. Entrance Survey
    1. Responses weighted toward the low end of familiarity with open web technologies
    2. In the exit survey, students rated their initial familiarity even lower
    3. Course teaches computer science concepts to cartographers and design majors—would be a very different class if we were teaching cartography to computer scientists
  15. Instructor Logs
    1. Critical practice for reflective teaching—highly recommended
    2. Had to teach to different learning speeds for different groups of students
    3. Some difficulties we didn’t think students would experience
    4. Teaching D3 the most successful outcome of course in comparison to 2013
      1. Most likely due to teaching data first and highly structured D3 lessons
  16. Student Feedback
    1. Helped highlight some misconceptions students held, such as underestimation of time, and threshold concepts, such as object-oriented programming
    2. Online examples could be both helpful and troublesome depending on how well explained and how close they fit lab scenarios
    3. Students displayed increasing understanding both of open web concepts and of what they did not yet know
    4. Filled with evidence for new computational thinking skills (quote)
  17. Exit survey
    1. Self-reported expertise with tools we taught increased from low to moderate, statistically significant
      1. No increase in open web tools we didn’t teach
    2. Asked students to rate overall emotional experience; average increased with each assignment
      1. No very negative responses to D3 lab
  18. Learning outcomes
    1. Which of the desired learning outcomes were demonstrated?
      1. Leaflet—vast majority of students got passing grade, rated their knowledge of Leaflet higher than beginning
      2. D3—same plus positive, confidence-building experience
      3. Final projects—all passing; a few students completed on their own; some professional quality
      4. Cognitive development—demonstrated in feedback, logs, exit survey
      5. Concept integration—harder to say; students rated concept transfer highly in exit survey but relied on lecture material little
        1. Room for improvement
  19. Topic sequence I
    1. In exit survey, asked students to reorder any topics they thought were out of order in course sequence
    2. Boxplots show ranges, quartiles, medians; circles show actual topic location in sequence
  20. Topic sequence II
    1. Most topics had median close to actual location
    2. Notice “using developer tools” and “GitHub” had extremely low median compared to actual location in sequence
      1. Foundational threshold concepts—should have come earlier in sequence
  21. Topic sequence comparison I
    1. We will be re-teaching 575 in-house this Spring, online in Spring 2017
    2. My summer job was to write lab modules for the online version
    3. I reworked module order based on assessment results
    4. Will also be used as basis of topic order for this spring’s residency course lab
  22. Topic sequence comparison II
    1. This shows where some of the key threshold concepts were in 2014 iteration
    2. Each topic expanded into multiple topics with more structure
    3. Some non-helpful topics eliminated
    4. Advantage of online: students work at own pace. Disadvantage: harder to give individualized assistance or review remedial concepts
      1. Entire online curriculum more structured by necessity; need not be for residency course
  23. Topic sequence comparison III
    1. This slide shows the topics students identified as coming too late in the sequence
    2. Developer Tools moved to Module 2
    3. GitHub mostly moved to Module 1, but separated between development platform aspects of Git and GitHub.com and deployment aspects of GitHub.io
      1. Will use it as main platform for project storage, versioning, and grading
    4. No final project in online course due to limitations of collaboration over vast distances
  24. Thank you
    1. This has been a bit about our experiences redesigning our G575 lab curriculum to better teach new open web platform technologies
    2. GitHub URL is source for published versions of the course lab assignments
    3. Student projects
    4. Happy to take questions.
  25. Bonus slides!!! Check out my students’ work!

Connecting PostGIS to Leaflet using PHP

For a few years now, I’ve been building wikimaps that rely on a PostgreSQL/PostGIS database to store geographic data and Leaflet to display that data on a map. These two technologies have increasingly become the industry-standard open-source back- and front-end web mapping tools, used together by such behemoths as OpenStreetMap and CartoDB. While you can use a go-between such as a GeoServer Web Feature Service (WFS) to connect the two, the simplest, most flexible, and most reliable way I’ve found to connect data to map is through a little PHP script that essentially formats the queries and lets PostGIS and JavaScript do all the heavy lifting (note that my opinion on this has changed since I wrote my series of tutorials on web mapping services two years ago).

It occurred to me recently that I should share my basic technique, and I did so for UW-Madison Cartography students in a short presentation as part of our Cart Lab Education Series. This blog post is essentially a transcription of that tutorial. It assumes you have already installed PostgreSQL with the PostGIS extension and the pgAdmin III GUI (I highly recommend installing all three through the Stack Builder), and that you possess a working understanding of SQL queries, HTML, JavaScript, and Leaflet.js. I will gently introduce some PHP; this shouldn’t be too painful if you already have a bit of background in JS.

Let’s get started, shall we?

I have provided the tutorial sample code on GitHub. A colleague just introduced me to the wonders of Adobe Brackets, so let’s use it to take a look at the directory tree first:

Directory Tree

As you can see, I’ve provided a data folder with a complete shapefile of some example data I had lying around. This open-access dataset covers frac sand mines and facilities in western Wisconsin, and comes from the Wisconsin Center for Investigative Journalism. The first step is getting the data into a PostGIS-enabled database using pgAdmin III’s PostGIS Shapefile and DBF Loader (enabling this plug-in is slightly tricky; I recommend these instructions). After you have created or connected to your PostGIS database, select the loader plug-in from the pgAdmin III Plugins menu. Click “Add File”, navigate to the data directory, and select the shapefile. Make sure you change the number under the SRID column from 0 to 26916, the EPSG code for the NAD83 UTM Zone 16N projection. PostGIS will require this projection information to perform spatial queries on the data. Once you have changed this number, click “Import”.

PostGIS Shapefile and DBF Loader

With your table created, we can now move to the fun part—code! For formatting clarity, I have only included screenshots of the code below, and will issue a reminder that the real deal is posted on GitHub here. I’ll only briefly touch on the index.html and style.css files. Within index.html are links to the jQuery, jQuery-ui, and Leaflet libraries. I am mainly using jQuery to facilitate easy AJAX calls and jQuery-ui to create an autocomplete menu for one of the form’s text inputs. Leaflet of course makes the map. There are two divs in the body, one for the map and one for a simple form. The most useful thing to point out here is the name attributes of the text input elements, which will become important for constructing the SQL queries to the database.

html snippet

Style.css contains basic styles for placing the map and form side-by-side on the page, and bears no further mention.

main.js snippet

Turning to main.js (above), I have defined three global variables. The first, map, is for the Leaflet map. The second, fields, is an array of field names corresponding to some of the many attribute fields in my fracsandsites table in the database; this is the attribute data I want to see in the pop-ups on the map (other fields may be added). The third variable, autocomplete, is an empty array that will hold feature names retrieved from the database for use in the autocomplete list.

The screenshot above shows the first two functions defined after the global variables, with a $(document).ready call to the initialize function. This function sets the map height based on the browser’s window height, then creates a basic Leaflet map centered on Wisconsin with a simple Acetate tileset for the basemap. It then issues a call to the getData function. Here’s where the fun really begins.

The jQuery.ajax method is a very simple substitute for a whole lot of ugly native XMLHttpRequest code. It can take data as a string of parameters in URI query format or as a JavaScript object; I’m using the latter because it is neater. You can include any parameters, but the important part is to think about what you need out of the DOM to create the SQL query that’s going to grab your data. I’m designating the table name and the fields here, although you could also hard-code both in the PHP if you don’t need them to be dynamic.
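
Since the real code lives in the screenshots and on GitHub, here is a rough reconstruction of that call based on the description (a sketch, not a copy of the actual file):

function getData(){
    $.ajax("getData.php", {
        data: {
            table: "fracsandsites", //the database table to query
            fields: fields //the global array of attribute field names
        },
        success: mapData //the callback that maps the response, covered below
    });
};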

OK, let’s flip over and see what’s going on in getData.php…

php snippet

If you’re not used to seeing PHP code, some things here may look a bit odd. The first two lines declare that what follows is PHP code for the interpreter and enable some feedback on any I/O errors that occur. PHP is very picky about requiring semicolons at the end of each statement that isn’t a control structure (an opening or closing curly brace), and a syntax error will cause the whole thing to fail silently despite line 2. Lines 5-9 assign the database credentials to variables, which are denoted with the dollar sign (unlike JS, there is no var keyword). Make sure to change these to your own database credentials. On line 11, the $conn variable is assigned a pg_connect connection resource, which connects to the database using the parameters provided above. Note that in PHP there is a difference between double and single quotes: both denote a string, but within double quotes you can put variables directly into the string without concatenation and the interpreter will recognize them as variables rather than string literals. The following if statement tests the integrity of the connection and quits with an error if it fails.

One important thing to note here is that for this to work, you must already have PHP installed and enable the php_pgsql extension by uncommenting it in your php.ini file, which is stored in your PHP directory (probably somewhere in Program Files if you’re on a PC). You can get PHP here.

Lines 18 and 19 retrieve the data sent over from the $.ajax method in the JS. $_GET is a special designated variable in PHP that is an array of parameters and associated values submitted to the server with a GET header (there is also one for the POST header). In PHP, an array is analogous to both an object and an array in JavaScript; it’s just that the latter form uses zero-based sequential integers as keys. In this case, we can think of the $_GET array as just like the AJAX data object, with the exact same keys and values (table with the string value "fracsandsites" and fields with its array of string values). Line 18 assigns the first to a new PHP $table variable and line 19 assigns the second to a $fields variable.

Since $fields is another array, to use it in a SQL query its values must be joined into one comma-separated string. The foreach loop on line 23 does this, assigning each array index to the variable $i and each value to the variable $field. Within the loop, each field name is concatenated onto the $fieldstr variable (the . is PHP’s concatenation operator), preceded by l. because the SQL statement will assign the alias l to the table name (why will become clear later).

After all fields have been concatenated, a final piece is added to the $fieldstr: ST_AsGeoJSON(ST_Transform(l.geom,4326)). This is the first bit of code we’ve seen that is specifically meant for PostGIS. We want to extract the geometry of each feature in the table in a form that’s usable to Leaflet, and that form is GeoJSON. Fortunately for us—and what makes PostGIS so easy to use for this purpose—PostGIS has a native method to translate geometry objects stored in the database into GeoJSON-formatted strings. ST_AsGeoJSON can simply take the geometry column name as its parameter, but in order for the data to work on a Leaflet map, it has to be transformed into the WGS84 coordinate reference system (unprojected lat/long coordinates). For this purpose, PostGIS gives us ST_Transform, which takes the geometry column name and the SRID of the CRS into which we want to transform it (in this case, the familiar-to-web-mappers 4326).
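
Putting the pieces described so far together, the top of getData.php comes out roughly like this (a reconstruction from the description; the credentials and database name are made up, and the line numbers in the comments refer to the original file):

<?php
ini_set("display_errors", "on"); //line 2: surface errors during development

//lines 5-9: database credentials; change these to your own
$host = "localhost";
$port = "5432";
$dbname = "geodata"; //made-up database name
$user = "postgres";
$password = "mypassword";

//line 11: connect to the database
$conn = pg_connect("host=$host port=$port dbname=$dbname user=$user password=$password");
if (!$conn) {
    echo "Not connected to server";
    exit;
}

//lines 18-19: parameters sent over by $.ajax
$table = $_GET['table'];
$fields = $_GET['fields'];

//line 23: join the field names into one comma-separated string, aliased to l
$fieldstr = "";
foreach ($fields as $i => $field) {
    $fieldstr = $fieldstr . "l." . $field . ", ";
}

//tack on the geometry, transformed to WGS84 and formatted as GeoJSON
$fieldstr = $fieldstr . "ST_AsGeoJSON(ST_Transform(l.geom,4326))";

//line 31: our first SQL query
$sql = "SELECT " . $fieldstr . " FROM " . $table . " l";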

At this point, we now have all of the components of our first SQL query (line 31). If you were to print (or echo in PHP parlance) the whole thing without the variables, you would see

$sql = "SELECT l.gid, l.createdby, l.featname, l.feattype, l.status, l.acres, ST_AsGeoJSON(ST_Transform(l.geom,4326)) FROM fracsandsites l";

And, in fact, if you copied everything inside the quotes into the SQL editor in pgAdminIII, you would get a solid response of those attributes from all features in the table. Go ahead and do it. DO IT NOW!

sql editor output

For now, I’m going to skip the next few lines (we’ll come back to them later) and wrap up my PHP with this:

PHP snippet

Line 45 tests for a response from the database while also sending the query to the server with the pg_query method and assigning the result to the variable $response. The while loop on lines 51-56 retrieves each table row from the $response object (note: this is emphatically not an array; hence the use of the pg_fetch_row method) and echoes each attribute value, with the attribute values separated by comma-spaces and the rows separated by semicolons. As previously mentioned, PHP’s echo command “prints” data, in this case by sending it back to the browser in the XMLHttpRequest response object.
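
As a sketch, that wrap-up (continuing the getData.php reconstruction above) looks something like this:

//line 45: send the query to the server and test for a response
$response = pg_query($conn, $sql);
if (!$response) {
    echo "Query failed";
    exit;
}

//lines 51-56: echo each row, attribute values separated by comma-spaces,
//each row terminated by a semicolon
while ($row = pg_fetch_row($response)) {
    foreach ($row as $value) {
        echo $value . ", ";
    }
    echo ";";
}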

At this point we can go back to the browser and look at what we have. If you’re using Firebug, by default it will log all AJAX calls in the console, and you can see the response once it’s received. You should be able to see something like this:

Response in the console

Now all we have to do is process this data through a bit of JavaScript and stick it on the map. Easy-peasy. I’ll start with the first part of the mapData callback function:

js snippet

Lines 39-44 remove any existing layers from the Leaflet map, which isn’t really necessary at this stage but will become useful later when we implement dynamic queries using the HTML input form. For now, skip down to Line 47 and notice that we are starting to build ourselves a GeoJSON object from scratch. This is really the easiest way to get this feature data into Leaflet. If you need to be reminded of the exact formatting, open any GeoJSON file in a text editor, or start making one in geojson.io. Once we have a shell of a GeoJSON with an empty features array, the next step is to go ahead and split up the rows of data using the trailing comma-space and semicolon used in getData.php to designate the end of each row. Since these are also hanging onto the end of the last row, once the data is split into an array we need to pop off the last value of the array, which is an empty string. Now, if you console.log the dataArray, you should see:

dataArray in console

Now, for each row, we need to correctly format the data as a GeoJSON feature:

js snippet

Each value of the dataArray is split by the comma-spaces into its own array of attribute values and geometry. We create the GeoJSON feature object. The geometry is in the last value in the feature array (d), which we access using the length of the fields array since that array is one value shorter than d and therefore its length matches the last index of d. properties is assigned an empty object, which is subsequently filled with attribute names and values by the loop on lines 69-71. The if statement on lines 74-76 tests whether the feature name is in the autocomplete array, and if not, adds it to the autocomplete array. Finally, the new feature is pushed into the GeoJSON features array. Lines 82-84 activate the autocomplete list on the text input for the feature name in the query form. If you were to print the GeoJSON to the console and examine it in the DOM tab, you should see:

the geojson in the DOM tab
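
Pieced together from the description, the mapData callback so far looks roughly like this (a sketch; the tile-layer test and the input selector are assumptions):

function mapData(data){
    //lines 39-44: remove any existing overlay layers from the map
    map.eachLayer(function(layer){
        if (!(layer instanceof L.TileLayer)){ //keep the basemap (an assumption)
            map.removeLayer(layer);
        };
    });

    //line 47: start building a GeoJSON object from scratch
    var geojson = {
        "type": "FeatureCollection",
        "features": []
    };

    //each row ends with a trailing comma-space and semicolon
    var dataArray = data.split(", ;");
    dataArray.pop(); //the last value is an empty string

    dataArray.forEach(function(d){
        d = d.split(", ");
        var feature = {
            "type": "Feature",
            //the geometry is the last value; fields.length is its index in d
            "geometry": JSON.parse(d[fields.length]),
            "properties": {}
        };
        //lines 69-71: fill the properties object with attribute names and values
        for (var i = 0; i < fields.length; i++){
            feature.properties[fields[i]] = d[i];
        };
        //lines 74-76: collect unique feature names for the autocomplete list
        if (autocomplete.indexOf(feature.properties.featname) == -1){
            autocomplete.push(feature.properties.featname);
        };
        geojson.features.push(feature);
    });

    //lines 82-84: activate the autocomplete list on the feature name input
    $("input[name=featname]").autocomplete({source: autocomplete});
};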

Now that we have our GeoJSON put together, we can go ahead and use L.geoJson to stick it on the map.

js snippet

I won’t go through all of this because it should be familiar code to anyone who has created GeoJSON overlays with Leaflet before. If you’re unfamiliar, I recommend starting with the Using GeoJSON with Leaflet tutorial.

This gets us through bringing the data from the database table to the initial map view. But what’s exciting about this approach is how dynamic and user-interactive you can make it. To give you just a small taste of what’s possible, I’ve included the simplest of web forms with which a user can build a query. If you’re at all familiar with SQL queries through database software, ArcMap, etc. (and you should be if you’ve gotten this far in this tutorial), you know how powerful and flexible they can be. When you’re designing your own apps, think deeply about how to harness this power through interface components that the most novice of users can understand. As a developer, you gain power through giving it to users.

As previously mentioned, the form element in the index.html file contains two text inputs with unique name attributes. The first of these is designated for distance (in kilometers), and the second is for the name of an anchor feature. We will use these values to perform a simple buffer operation in PostGIS, finding all features within the specified distance of the anchor feature. Ready to go? OK.

In index.html, the value of the form’s action attribute is "javascript:submitQuery()". This calls the submitQuery function in main.js. Here is that function:

js snippet

We use jQuery’s serializeArray method to get the values from the form inputs. This returns an array of objects, each of which contains the name and value of one input. Then, instead of creating the data object inline with the AJAX data key, we create it as a variable so we can add the serialized key-value pairs to it. This is done with a forEach loop, which takes each object in the formdata array and assigns its name as a key of the data object and its value as the corresponding value. Get it? Good. (If not, just console.log the data object after the loop.)
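
Reconstructed from that description, submitQuery looks something like this sketch:

function submitQuery(){
    //get the values from the form inputs
    var formdata = $("form").serializeArray();

    //create the data object as a variable so we can add to it
    var data = {
        table: "fracsandsites",
        fields: fields
    };

    //assign each serialized name-value pair to the data object
    formdata.forEach(function(input){
        data[input.name] = input.value;
    });

    //issue the new AJAX call to getData.php
    $.ajax("getData.php", {
        data: data,
        success: mapData
    });
};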

With the data object put together, it’s time to issue a new $.ajax call to getData.php. Let’s flip back over and take another look at that. Everything is the same except now we have a few more $_GET parameters to deal with and a different query task. Hence the if statement on lines 34-40:

php snippet

The if statement tests for the presence of the featname parameter in the list of parameters sent through AJAX. If it exists, that parameter’s value gets assigned to the $featname variable and the distance parameter value, multiplied by 1000 to convert kilometers to meters, gets assigned to the $distance variable.

Now for the hard part. Remember our simple SQL statement in which we gave the table and all of its attributes an alias (l) for no apparent reason? Well, the reason is that we now have to concatenate SQL code for a table join onto it. Whenever you do a join in PostgreSQL, each table on either “side” of the join needs its own alias. Since the initial table reference is on the left side of the JOIN operator, I assigned the original table the alias l, for left, and the joined table r, for right. Obvious, huh? Well, maybe not. In any case, the principle is that although both sides of the join reference the same table, Postgres will look at them like they are different tables. This is a LEFT JOIN, meaning that the output will come from the table on the left, and the table on the right is used for comparison.

There are two parts to the comparison here: the ON clause and the WHERE clause. The ST_DWithin statement following ON specifies that output from the left table will be rows (features) within the user-given distance of rows (features) from the right table; since our table is stored in a UTM projection, the distance units will be meters (if it were stored in another CRS, say WGS84, we would have to use ST_Transform on each table’s geometry for it to work). The WHERE clause narrows the right-hand comparison to a single feature: the one named by the user in the input form. Translating to English, you could read this as, “Give me the specified attribute values and geometry for all of the features in the left table within my specified distance of the feature I named in the right table.” Or something like that.
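
As a sketch, the concatenated join might look like this (a guess at the exact SQL shape; and in real life, sanitize the user input before it gets anywhere near a query):

//lines 34-40 of getData.php, reconstructed
if (isset($_GET['featname'])) {
    $featname = $_GET['featname'];
    $distance = $_GET['distance'] * 1000; //kilometers to meters

    //self-join: l provides the output rows, r the comparison feature
    $sql = $sql . " LEFT JOIN " . $table . " r" .
           " ON ST_DWithin(l.geom, r.geom, " . $distance . ")" .
           " WHERE r.featname = '" . $featname . "'";
}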

OK, that’s the biggest headache of the whole demo, and it’s over. The features that get returned from this query now go back to the mapData function in main.js. The map.eachLayer loop that removes existing layers from the map now has a purpose: get rid of the original features so only the returned features are shown. The new features are plunked into a new homemade GeoJSON and onto the map through L.geoJson. Here’s an example using a query for all sites within 10 km of the Chippewa Sands Company Processing Plant:

screenshot of query results

That’s it. There’s lots more you should learn about data security (particularly with web forms), PDO objects, error prevention and debugging, etc. before going live with your first app. But if you’ve gotten through this entire tutorial, congratulations—you’re on your way to designing killer user-friendly database-centered web maps.

Update 3/31/2017: I have been getting a lot of comments on this blog post recently requesting help with some error or other a reader is experiencing while trying to implement this tutorial. While I’m flattered the tutorial is getting a lot of attention, I am also very busy with work and family and unfortunately don’t have time to work through users’ issues with the code. Thus, I will no longer be responding to comments on this post. Keep in mind that the parameters and properties used in the examples above are tailored to the example dataset, and many will need to be altered if you’re implementing your own app. Also check that the right PHP extensions are enabled and your database connection info and credentials check out. For further assistance, I highly recommend using StackOverflow, W3Schools, and the PostgreSQL, PostGIS, and PHP documentation.