WebGIS is Fun and So Can You

I’ve written this post to accompany the talk I gave on August 31 to the UW Cartography Lab’s Education Series special two-day workshop in partnership with Mapbox. I was asked to talk about JavaScript and Turf.js. Given my mixed audience, I thought talking about Turf right away would be putting the cart a bit before the horse—first, I needed to build a simple web GIS app that could use Turf. So this 1-hour talk turned into an as-noob-friendly-as-possible walkthrough of building such an app.

To start, here are the slides I used; really, just an outline of my talk.

The link on the second slide goes to a dropboxed zip file containing two directories: one called “initial” and another called “final.” The “initial” directory contains a boilerplate HTML file that I used to begin my live app-building demo and a data directory with two zip files. The “final” directory contains the final app script and data files. I’ll be walking through the “final” version, but you can start with “initial” and try to build it out yourself for practice.

Slide 3 is what I called the “Sam Matthews Mantra.” Sam Matthews is this awesome guy who I used to work side-by-side with in the Cart Lab back in 2012 and now works at Mapbox; his visit to Madison was the original impetus for this “reunion” event. Yesterday, he gave a talk on the basic structure of slippy maps, including the four ingredients: tiles, library, data, and internet. But for the purposes of my tutorial, I modified this mantra a bit (slide 4) to outline the parts of my demo: library, tiles, data, and analysis.

To create my app, I first needed to load some helper libraries. The app uses jQuery to interface with the DOM (Document Object Model, the structure of a website) and facilitate asynchronous communication between the browser and server. For the actual mapping, I used Leaflet, now the most popular open-source JavaScript library for creating web maps. The spatial analysis components that make this a web GIS app use Turf.js, but I’ll get into that library more later. I call the libraries from remotely hosted sources using some simple script tags in the <body> section of index.html:

JS-Turf-1
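
In outline, those script tags look something like the following (the CDN URLs and version numbers here are just placeholders; grab the current links from the jQuery, Leaflet, and Turf sites):

<!-- helper libraries: jQuery, Leaflet, and Turf.js (illustrative CDN links) -->
<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<script src="https://unpkg.com/leaflet@0.7.7/dist/leaflet.js"></script>
<script src="https://unpkg.com/@turf/turf/turf.min.js"></script>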

Leaflet also requires its own stylesheet, linked in the header:

JS-Turf-2
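
That link looks roughly like this (again, the URL is illustrative and should match whichever Leaflet version you load):

<link rel="stylesheet" href="https://unpkg.com/leaflet@0.7.7/dist/leaflet.css" />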

With Leaflet imported, we can build a basic map with the following JavaScript code within the $(document).ready callback function (this bit of jQuery causes the browser to wait until the entire page is loaded before executing the JavaScript):

JS-Turf-3
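
In sketch form (the div id, center, and zoom are placeholders rather than the exact values from my demo):

$(document).ready(function(){
    //create the Leaflet map in the div with id "map"
    var map = L.map('map', {
        center: [20, 0], //illustrative center and zoom
        zoom: 2
    });
});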

Awesome—I now have a Leaflet map; you can tell by the zoom button in the upper-left corner. Note that I am running the app through a localhost server, rather than double-clicking on the index.html file. Some browsers (Chrome especially) strongly dislike loading data outside of a server, so it’s always best to set one up first. If you don’t want to go through the rigmarole to set up something permanent on your machine, a great temporary solution is to run your app through a preprocessor app such as Prepros or CodeKit.

JS-Turf-4

(The text and “Click me!” button were included in the index.html template in the “initial” folder.)

It’s not really much of a slippy map, though, without some map tiles. Nowadays, I like to choose tilesets for my Leaflet apps that are included in the Leaflet Providers Preview. Leaflet-providers is a small plug-in for Leaflet that lets you use a shorthand tileset name instead of a full tileset URL to create a Leaflet tile layer, but the Preview site is itself a handy tool that gives you the full code for creating the tile layer if you don’t want to download the plug-in. For my purposes in this demo, I just copied and pasted the code for the Stamen Watercolor tileset:

JS-Turf-5
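
It came out looking more or less like this (the URL and attribution are paraphrased from the Providers page, and Stamen’s tiles have since moved hosts, so copy the current version rather than this one):

L.tileLayer('https://stamen-tiles-{s}.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.jpg', {
    attribution: 'Map tiles by Stamen Design; map data by OpenStreetMap contributors', //paraphrased
    subdomains: 'abcd',
    minZoom: 1,
    maxZoom: 16
}).addTo(map);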

A little explanation of the above: L.tileLayer is a method of Leaflet that creates a tile layer. It takes two parameters: a URL with variables (letters inside curly braces) that are replaced by the library automatically depending on which tiles are called from the server, and an options object with any of several options for the tile layer. The addTo method then adds the new tile layer to the Leaflet map object, contained by the map variable.

With tiles loading, it’s time for data! For the demo, I chose to use two datasets from Natural Earth, the go-to website for popular global datasets covering cultural, physical/natural, and raster themes. The two datasets I chose were States and Provinces and Populated Places, both at the 1:50 million (medium) scale (unbeknownst to either of us beforehand, John Czaplewski, who presented after me, chose the exact same datasets for his live PostGIS demo). To prevent any snafus with the Natural Earth site (it was being slow when I tested it the night before), I went ahead and included these two shapefiles in the “initial” demo folder, each in its own zip archive (“states.zip” and “places.zip”).

If you’ve ever worked with GIS software, you probably know what a shapefile is. Really, it’s a collection of several different files with different components of a geospatial dataset (the .shp component contains the geometry data, while other components contain attribute data, metadata, etc.). Shapefiles are a popular standard format for GIS data (though not usually the best one for spatial analysis tasks), but they’re pretty useless for web apps. Instead, the standard geospatial data format for the web has become the GeoJSON format.

A handy tool for converting shapefiles to GeoJSONs is mapshaper, created by UW alumnus and New York Times graphics wiz Matt Bloch. We can import a shapefile to mapshaper by just dragging the .shp part of the file and dropping it on the site, but that will result in only the geometries being converted to GeoJSON without any attributes. We want the attributes, so we need the whole shapefile; luckily, mapshaper lets us import a zip file containing it. Once we’ve uploaded a shapefile, mapshaper should display something like this:

js-turf-6
The populated places shapefile in mapshaper

Mapshaper is a great little program. It allows you to quickly and easily simplify polygon geometries, reducing the file size. In this case, though, all we want it to do is spit the data back out as a GeoJSON. For this, we click on “Export” in the upper-right corner, then choose “GeoJSON” and hit “Export” again. This should cause a .json file to download. To make accessing the data easier, I renamed the files “places.geojson” and “states.geojson,” respectively.

Now, just what is a GeoJSON? It’s a geospatial variant of JSON, which stands for “JavaScript Object Notation.” Essentially, it’s a more picky formatting of a JavaScript “object,” which is really not an object in the true object-oriented programming sense but rather a type of map or dictionary data container. To see what this looks like, neatly formatted, we can import our new GeoJSON data into another handy little web app called geojson.io, an open-source project largely created by Mapbox’s Tom MacWright. Here is our populated places file displayed in it:

js-turf-7

On the right-hand side of the window, you can see the object structure of the file, which consists of nested key-value pairs. Every GeoJSON file like these has a "type" of "FeatureCollection" and a "features" property consisting of an array of features. This will become important when we use Turf.js to operate on the data. Each feature in turn has a "type", which is always "Feature", a "geometry" object containing the feature geometry as one or more geographic coordinate pairs (always in the WGS 84 coordinate system), and a "properties" object consisting of the attributes, if any. Note that a GeoJSON is similar to a shapefile in that it doesn’t encode any relationships between features, or topology in GIS speak (there is another web spatial data format, TopoJSON, which does encode topology, but we won’t get into that in this tutorial).
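
For reference, here is a stripped-down example of that structure, with made-up coordinates and attributes:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {
        "type": "Point",
        "coordinates": [-89.4, 43.07]
      },
      "properties": {
        "name": "Madison",
        "pop_max": 233209
      }
    }
  ]
}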

Now that we have our data in the right format, we need to load it into our code and onto the map. Loading data into a JavaScript program is trickier than it sounds, but it’s worth taking the time to show you how to do it right. Some tutorials out there will tell you to just assign a variable to the code in each JSON file and bring it into the site with HTML <script> tags, but if you’re loading geographic datasets (which tend to be large), this has a tendency to bog down the loading of your page. It’s much better to load the data asynchronously, adding it to the page after it has loaded into the script. But this means that the rest of your script will have executed before your data is loaded. Thus, you need a special function called an AJAX callback to make use of your data only after it has been loaded by the browser.

First, we need to make sure our .geojson files are stored in the “data” folder of our working directory. Then, we can use one of jQuery’s many helpful AJAX methods to load the data into our script. Because we have two datasets we need to load, it’s best to load them in parallel (at the same time) and only call the function that uses the data (the callback) after both files have loaded. To do this with jQuery, we can use the $.when method:

js-turf-8
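
The shape of that call is roughly this (a sketch; the file paths assume the renamed .geojson files sit in the data folder):

$.when(
    $.getJSON('data/states.geojson', function(response){
        data.states = response; //save each file's GeoJSON to the data object
    }),
    $.getJSON('data/places.geojson', function(response){
        data.places = response;
    })
).then(addData); //overall callback fires once both files have loaded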

Note that the two $.getJSON methods are actually parameters of the $.when method, so there should be a comma between them, and no semicolons in between or after them (I ran into trouble with this both while practicing for the demo and during it). Each one of these .getJSON methods calls a data file and then executes a separate callback for that file, which saves the file’s data to a property of an object I created previously (data). Finally, the .then method calls the overall callback function after the data has loaded, which I’ve named addData. Now, I’ve put a few carts before the horse here; let’s back up and take a look at where I define the data object and the addData function, above the AJAX call:

js-turf-9

Again, this code is above the $.when method in the script. Here, I’m first defining two objects: data, which (as we have already seen) will hold the GeoJSON data, and dataLayers, which will hold Leaflet’s rendering of that data into layer objects that can go on our map.

Then I define the callback function, addData. By the time this function executes, the $.getJSON callbacks have already saved each file’s GeoJSON data to properties of the data object, so I can go ahead and take a look at the structure of that object in the console and see that my data is indeed present:

js-turf-10
GeoJSON objects neatly formatted in the Firebug console

Now that I have this data, I can use Leaflet’s L.geoJson method to stick each layer on the map. This method takes two parameters: the data I want to turn into a map layer, and an options object that can hold a number of different layer options. For the states layer, I’ve given it some style options to override Leaflet’s defaults. For the places layer, I’m using the pointToLayer option to create a function that iterates over each point feature and turns it into a Leaflet circleMarker, which I have styled to look like a moderately-sized black dot. Each L.geoJson method is chained to the .addTo(map) method to add it to the map, and the resulting layer object is assigned to a property of the dataLayers object I created above the addData function, allowing for later access.
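
Condensed into a sketch (the style values and radii here are placeholders, not my exact styling):

function addData(){
    //states: polygons with style overrides
    dataLayers.states = L.geoJson(data.states, {
        style: {
            color: '#999',
            weight: 1,
            fillOpacity: 0.2
        }
    }).addTo(map);

    //places: turn each point into a small black circle marker
    dataLayers.places = L.geoJson(data.places, {
        pointToLayer: function(feature, latlng){
            return L.circleMarker(latlng, {
                radius: 4,
                color: '#000',
                fillOpacity: 1
            });
        }
    }).addTo(map);

    //hand off to the analysis step described below
    analyze(data, dataLayers);
};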

Here is what my map now looks like:

js-turf-11

 

With data on the map, we are ready for the fourth and final step, which makes this a true WebGIS: analysis. Now, as you can see in my HTML and the image above, I have included a large “Click me!” button in the boilerplate for the app. A good WebGIS should be interactive; you want to allow your users to perform operations on the data, not just do what you think they want to do for them. Since this is a simplified demo, I figured I would just include one button instead of several to demonstrate the concept. At the end of the tutorial, each click of the button will do something different and interesting to the data.

Before we get there, to keep our code neat and make sure the analysis only gets performed after the data is loaded, we need a new function called from within addData to put our analysis tasks in. I’ve called this function analyze, and pass it the two objects I created, data and dataLayers. If you’re working from the “initial” index.html file, you will want to move the $('#mybutton').click listener and clickme callback function inside of this analyze function. Inside analyze, we will perform three types of analysis using Turf.js: a point-in-polygon test, creation of a bounding box, and creation of a triangulated irregular network (TIN). To have our button do each of these in turn, we will create a counter and increment it each time the button is clicked, calling a different analysis function for each counter value.

js-turf-12
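
The skeleton of analyze looks something like this (the analysis code itself gets filled in over the next few sections):

function analyze(data, dataLayers){
    var clicks = 0; //counter for button clicks

    $('#mybutton').click(function(){
        //call a different analysis for each counter value
        if (clicks === 0){
            //point-in-polygon test goes here
        } else if (clicks === 1){
            //bounding box goes here
        } else if (clicks === 2){
            //TIN goes here
        };
        clicks++;
    });
};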

Before we go further, the thing to know about Turf is that its methods operate very much like toolboxes in ArcGIS: you put one or more layers in and you get a new layer out. The big difference is that in this case, each layer is in GeoJSON format, either an individual feature or an entire FeatureCollection. Turf includes dozens of helpful analysis tools that can all be run client-side in the browser. This gets an A+ for convenience and interactivity, but a C or D for performance. If you’re using big data or have a complex series of tasks to run, fire up Arc or Q or Python and skip the JavaScript.

Now, let’s create our point-in-polygon function. A point-in-polygon test is a classic problem in computational geometry and has all sorts of applications in GIS. Turf’s .within method accomplishes this test. It takes two parameters—a set of points and a set of polygons—and returns a new FeatureCollection containing just the points that are within the polygons. So, say we want to find the populated places within the U.S. lower 48 states. Since our states dataset has states and provinces for other countries as well, we will first have to pick out only U.S. states that aren’t Alaska or Hawaii and add them to the features array of a new FeatureCollection. We can do this with a piece of native JavaScript, a .forEach loop:

js-turf-13
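
Roughly like so (I’m writing the Natural Earth attribute names from memory here, so check them against your own data):

//a new FeatureCollection to hold only the lower 48 states
var usStates = {
    type: 'FeatureCollection',
    features: []
};

data.states.features.forEach(function(feature){
    if (feature.properties.admin === 'United States of America' &&
        feature.properties.name !== 'Alaska' &&
        feature.properties.name !== 'Hawaii'){
        usStates.features.push(feature);
    };
});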

Now that we have our subset of states—stored in the usStates variable—we can use it to perform our point-in-polygon test, and view the results in the console:

js-turf-14
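
The call itself is a one-liner (points first, then polygons):

var usPlaces = turf.within(data.places, usStates);
console.log(usPlaces);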

js-turf-15

There are 94 populated places within the U.S. Lower 48 out of our original dataset. To put these on the map, we can simply create a new L.geoJson layer (this time with red dots) and add it to the map. We can also replace the places component of our data object with the new dataset, so that our next two analysis operations are only operating on the U.S. places.

js-turf-16
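
Something along these lines (the red styling is illustrative):

//map the U.S. places as red dots
dataLayers.usPlaces = L.geoJson(usPlaces, {
    pointToLayer: function(feature, latlng){
        return L.circleMarker(latlng, {
            radius: 4,
            color: '#f00',
            fillOpacity: 1
        });
    }
}).addTo(map);

//the next analyses should only operate on the U.S. places
data.places = usPlaces;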

Now when we click on the “Click me!” button, we should see this result:

js-turf-17

That was actually the hardest Turf analysis I got to in the demo. I wanted to get the tough one out of the way first, I guess. The next two are much simpler. First, the bounding box:

JS-Turf-18.png
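
In sketch form (the styling is illustrative):

//a single polygon enclosing every vertex of the U.S. places
var bbox = turf.envelope(data.places);

dataLayers.bbox = L.geoJson(bbox, {
    style: {
        color: '#000',
        weight: 2,
        fillOpacity: 0.1
    }
}).addTo(map);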

This uses Turf’s .envelope method to return a polygon encompassing all vertices. Once again, it makes a call to L.geoJson to plunk the bounding box onto the map. Voilà:

js-turf-19

Finally, we’ll create a TIN using our U.S. Places as the input dataset. Turf’s .tin method takes the dataset and optionally the name of an attribute that can be used as a z value for each vertex. This results in polygons that have three properties: a, b, and c, the z values. We can use this data to shade the triangles; in this case, I chose to calculate the averages of the three values and use each polygon’s average to derive its percentage of the highest average z value in the dataset. I then set this percentage as the opacity of the polygon to make the data visible.

js-turf-20
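
Here is the gist of it (I’m using a made-up attribute name for the z value; substitute whichever numeric field your places data actually has):

//build the TIN, using an attribute as the z value for each vertex
var tin = turf.tin(data.places, 'pop_max'); //'pop_max' is a placeholder field name

//find each triangle's average z value and the highest average overall
var maxAvg = 0;
tin.features.forEach(function(feature){
    var props = feature.properties;
    props.avg = (props.a + props.b + props.c) / 3;
    if (props.avg > maxAvg){ maxAvg = props.avg; };
});

//shade each triangle by its average as a share of the maximum
dataLayers.tin = L.geoJson(tin, {
    style: function(feature){
        return {
            color: '#000',
            weight: 1,
            fillOpacity: feature.properties.avg / maxAvg
        };
    }
}).addTo(map);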

Here is the result (after three button clicks):

js-turf-20

That’s about it for this demo. Of course, there are lots of ways to use Turf that don’t involve Leaflet; since it speaks GeoJSON, it’s compatible with a wide variety of other libraries and frameworks. Hopefully this has been a useful intro to open source WebGIS tools and inspires you to go do something cool.

Carbon Emergency Infrastructures

The following post contains the transcript and images from a talk I gave at the UW-Madison Geography Symposium a couple days ago. Since I wrote the whole thing out, I thought I would go ahead and share it here.


 

The Carbon Pollution Emergency Act of 2022 has been heralded by historians as the first bold step against global warming taken in the United States. It implemented a heavy carbon tax on all fossil fuels and progressively restricted the amounts of coal, oil, and natural gas that could be produced or imported each year. The Act made it official federal policy to reach 100% renewable electricity generation by 2050. Proceeds of the carbon tax and a reduction in military spending were used to provide 90% rebates on small-scale solar and wind energy systems for homes and to fund the replacement of fossil fuel electricity plants with wind and solar farms. In the transportation sector, the Act placed a moratorium on the building of new roads and airports and boosted funding for mass transit systems by over 1,000%…

 Such a scenario might seem far-fetched today. But not much is invented without imagining it first. Think of all the tablets and cell phones we have now, even 3D printers—all technologies that were dreamed up on Star Trek in the sixties. If we can dream it, we can do it. And I’ve been dreaming. And since I’m a cartographer, my dreams look like maps. The focal point of my dreams up to this point has mostly been the transportation sector, since it’s big, it’s visible, and it entails nifty-looking machines that go “vroom!”

caltrain
Vroom!

I was inspired to share some of my dreams with you by some conversations during the CHE Symposium about the concept of infrastructure, and the question of whether nature can be conceived of as infrastructure. While I don’t see nature as a form of infrastructure, infrastructure does have a large role in shaping nature. This is particularly true of transportation infrastructure, as it has reshaped much of this country’s landscape and is our second-largest source of greenhouse gas emissions, after electricity. So come along and dream with me for the next few minutes about what a transportation future with a lighter carbon footprint could look like.

I want to start here:

Cuba_Hwy
Take a long walk off a short bridge?

This is the main east-west highway in Cuba. When I visited Cuba in 2005, we traveled part of this highway in a beat-up old school bus. We passed a number of unfinished bridges like this one, along with interchanges with dirt exit ramps leading to nowhere. After the Soviet Union collapsed, Cuba underwent a Carbon Emergency, what they call the Special Period. Overnight, they no longer had an overpaying buyer of their sugar and tobacco exports, so they no longer had money to import fossil fuels. Driving suddenly became very expensive. What you unfortunately can’t see in this picture, and I couldn’t find a good picture of, is all of the bicyclists, pack animals, and hitchhikers that I witnessed making use of the four-lane expressway. While Cuba is slowly being reintegrated into the fossil fuel economy, it still serves as a model for what a post-carbon or largely post-carbon society could look like. And it’s not that bad. The point I want to make here is that this highway is not abandoned, just repurposed. It changed my way of thinking about post-carbon infrastructures. New stuff takes more energy to build, and at least in the near term, that necessarily means it needs more fossil fuels and other non-renewable resources. We shouldn’t necessarily be thinking of how to build shiny new things, but rather how to repurpose the immense and under-cared-for infrastructures we already have to utilize them without fossil fuels.

MadisonRapidTransit_maponly

The first cartographic imaginary I created along these lines involved a commuter rail system for Madison. I designed this map over the summer of 2014. Railroads were the first nationwide transportation infrastructure designed to move people and goods quickly over land, so this would be a renewal of an old idea rather than a new idea. At least 60 percent of this proposed urban rail transit network would use existing railroad rights of way.

MadisonRapidTransit2

Conducting a bit of GIS analysis, I found that this system would put about 21% of the Dane County population and 47% of the county’s jobs within a half-mile of a station. The system-wide use would be higher still when considering the network of park-and-rides and bus transfer points that would allow it to interface with other forms of surface transportation.

This brings up an important point: the goal of a carbon emergency infrastructure cannot simply be to replace fossil fuel infrastructures, as this is not realistic in the near term. It must interface with them and make it convenient for humans to shift their habits away from heavily consumptive transportation. Imagine how much quicker and easier it would be to take a train from home in Schenk-Atwood or South Park Street to campus than drive a car or cram onto a bus. Young people, old people, and anyone else without a driver’s license would have more freedom to move and more jobs accessible to them. Mass transit is a racial justice issue as well, as those in the black community are less likely to have driver’s licenses than average due to poverty and institutional discrimination. Economists who think about mass transit say that it creates “positive externalities,” or virtuous feedback loops that benefit society at large. One of these is the “Mohring Effect,” the observation that better mass transit service creates more demand, which in turn increases the frequency of service, reducing travel times, creating more demand, and so on.

After completing the urban commuter map, I began to wonder how I might extend a vision for better mass transit outward to rural Wisconsin. Where I went to college, up north, we were fortunate enough to have a regional bus system with one route that ran every two hours on weekdays. This is to say, it was a valiant effort with the very limited funding available, but not very useful to your average commuter. Most parts of the state don’t even have that. I first thought of rebuilding the railroads, as per my Madison transit idea. But the extensive railroad network that used to exist in Wisconsin has largely gone to pot and would cost billions to rebuild. Why not pick up some lower-hanging fruit?

WisconsinBusRoutes
Potential bus routes in northern Wisconsin

During trips out to the Olympic Peninsula of Washington State, I used their regional bus system, which has more frequent service and quite effectively connects isolated towns across multiple counties. I thought, Wisconsin can do that, and better. So for the past year, in my very limited spare time, I have been working on concocting a transit map with the premise of bus routes with hourly service on every federal and state highway in Wisconsin. This infrastructure uses the road base that already exists, but reimagines it as a more efficient and less deadly people-carrying network. Rides would be pooled onto fast, clean electric buses with professional, sober drivers. Regular bus service, especially in the evenings, would reduce the epidemic of drunk driving in rural Wisconsin, where every burgh has a bar or two and driving is currently the only way to get home. The roads themselves need less maintenance, as fewer cars and trucks create less wear and tear on the pavement. The bus system brings freedom of movement and opportunities for breathing new economic life into the impoverished countryside.

 

proterra-bus
Get on the bus!

The primary investment needed would be in the moving parts of the infrastructure, the buses. These could be made all-electric using newer battery and fuel cell technologies, or run on cleaner forms of biodiesel, or some combination. They could largely be made out of recycled metals and plastics from decommissioned war machines and old pop bottles and grocery bags. Some nonrenewable resources would still be required.

beach-1170

DevilsLake
Let’s go play in the park!

Mass transit need not only be used for commuting to and from employment. Recreation has a place in our imaginaries too. And like access to jobs and other privileges that come with mobility, access to recreational and relaxation opportunities can and should be enhanced by mass transit. With a relatively small grant, the City of Madison could begin offering round-trip service to nearby state parks on summer weekends, making these oases accessible to young people and low income folks who can’t afford the gas and the park entry fee.

npbus
Passengers board a shuttle bus in Zion National Park. National parks provide shuttle buses to alleviate automobile congestion and reduce air pollution.

 

In this imaginary, city buses not in use during the more limited weekend service routes would be driven by Metro drivers who want to earn overtime pay while enjoying some time away from the city themselves. The buses begin from downtown, stopping at outlying transfer points for greater convenience, and spend the day traveling to and around the recreation area before returning home in the evening. From Madison, Parkbus routes could service a number of parks and recreation areas within an hour’s drive on different weekends throughout the summer, including Devil’s Lake, Blue Mounds, Governor Dodge, Kettle Moraine State Forest, and Wisconsin Dells. Service to the same area on both Saturday and Sunday facilitates overnight campouts. Urban citizens have the opportunity to relax and rejuvenate in nature without having to drive there. The most crowded parks, like Devil’s Lake and Governor Dodge, no longer have to contend with paving over more of their land as parking lots, and their air is cleaner too, all because urbanites have the option of taking the bus instead of driving.

IBX

To close, I want to take us back home to Madison and consider the one form of wheeled transportation that is closest to being carbon-neutral. That, of course, is biking. Madison is already a great place to commute by bicycle. It is currently ranked the 7th-most bike-friendly city in the U.S. by Bicycling Magazine. On the other hand, we’re only 7th, just behind hill-infested San Francisco! Madison can do better!

Tourism_MononaTerrace
Cyclists ride past Monona Terrace on the Capitol City Bike Path. Image from Shifting Gears, Wisconsin Historical Museum

One way to encourage more bike commuting is to improve existing commuter routes while reducing the convenience of driving. The most heavily used bike corridor in Madison, the Capitol City Path across the Isthmus, sees close to a thousand cyclists a day on average during peak season. But it requires frequent stops for cross-traffic on very minor streets, as well as tedious and dangerous crossings of two of the city’s busiest intersections. To get to campus that way, you have to go out of the way along Monona Bay before turning north. On my own commute to campus from the East Side, rather than deal with this detour, I ride Gorham and Johnson streets, which have very heavy car traffic and are downright treacherous in winter. But what if the existing bike corridor were re-envisioned as a “no-stop zone” for bikes, the nation’s first bicycle expressway? Think about the new raised pedestrian crossing on Park Street at the end of Library Mall. Why can’t such crossings be added to the existing bike path to give cyclists a smoother ride? On local streets, cars should be made to stop for bikes instead of the other way around. A cut-through bike path could be constructed alongside the train tracks that cut the corner from Broom Street to West Main Street.

IBX_south_bridgeIBX_north_bridge

IBX_wilson_st

Two new bicycle overpasses would carry cyclists quickly and safely across John Nolan Drive at North Shore and across Williamson Street at the Blair/John Nolan intersection, and a reconfiguration of the current “bike boulevard” along East Wilson Street would put bikes in a partitioned express track.

 

First-Settlement-Park-Overview
Plans by the Madison Design Professionals Workgroup for a covered-over Blair Street/John Nolan Drive

It turns out I am not the only person thinking about such things. The Madison Design Professionals Workgroup recently put forward a proposal to cover up John Nolan Drive to improve local pedestrian, bike, and rail connections between downtown and the Isthmus. Their plan is driven more by aesthetics than sustainability and would use a lot more nonrenewable resources, but does incorporate bicycle and commuter rail components. It could easily include my vision for a bike expressway (and if the designers are smart, I think they will). Of course, this plan is much more thought out than mine, and by people who actually get paid to do this stuff. To date, all of my doodling has been stuff I’ve daydreamed in my spare time. But who knows? Even daydreams can sometimes make their way into the world.

Printing in Leaflet

It’s been a while since I posted anything to this blog, but that doesn’t mean I haven’t been busy. I’ve been having all kinds of adventures working with Leaflet and making it do interesting things it probably wasn’t intended for. I’ll try to catch up with writing about some of these over the next few posts.

My most recent triumph involves printing a Leaflet map. Now, I know what you’re going to say: Why would you print a Leaflet map? Aside from the snarky answer why not?, the project I’m working on requires a map that is both interactive and can go where there are no mobile devices or internet access, and that requires printing.

Before I get into the technical stuff, I want to briefly expound on the broader implications of what I’m about to cover. We in the cartography world are generally split down the middle when it comes to media: either you make static maps for print or you make interactive web maps. Generally, the only crossovers are static maps that get plopped online Web 1.0-style, as images or PDFs. I think it’s high time we start thinking about transcending these media silos with our maps. Like, can you print an SVG graphic generated by D3? Sure you can, but can you control the scale at which it prints and make it look good? Similarly, how do we make zoomable, pannable slippy maps, with all the advantages those entail for web users, and make them printable as a resource for those who need to draw on top of them or pick them up and take them where reliable internet access doesn’t exist?

The specific map I’ve been working on for a little over a year now is a wikimap of data collected in eastern Senegal, which will enable trusted users with local knowledge to edit the data and contribute new data. One of the requirements of the application is that it be printable as posters to take to village meetings. The map utilizes an underlying satellite imagery tileset, a custom tileset with the polygon and line data (hosted with Tilestrata on Amazon AWS, which is a whole other blog post waiting to happen), and the point data as overlays added with your typical L.geoJson calls.

Screenshot1.png
Couloirs Transhumance, a map of herding routes in eastern Senegal

One challenge is that the map covers such a huge geographic area and includes so much data that a printed version of the entire thing would be unintelligible unless printed on a very large poster. Thus, users need to be able to choose both the scale of the map they print and the paper size, and they need to be able to preview what they’re going to print. So I built a print preview window.

Screenshot2_crop
Teh Printerface!

First of all, notice there are no satellite image tiles on the map. Satellite images are basically photos (but from spaaaaaaaace). Have you ever tried printing a photo at 72 dpi? It looks. like. crap. Likewise, raster tilesets look like crap when printed. Ditch ’em.

 

But I still wanted to take advantage of Leaflet’s smooth interaction capabilities to allow the user to control the map view that they’re going to print. Thus, I created a new Leaflet map with no base layer and L.geoJson overlays for all of the mapped data, including the two-dimensional features that are burned into my custom tileset on the main map. When there are thousands of SVG paths and raster icons on the map, it slows things down a bit. So I had to kill scroll wheel zoom and get the zoom buttons off the map anyway, since they’re not going to be present on the final printout. Hence the scale bar, which represents the Leaflet zoom levels as ticks on a line to give users an idea of how close they are to the minimum or maximum zoom.
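
Setting up that second map only takes a few lines; stripped down, it looks something like this (other options omitted):

//the print preview map: no tile layer, no zoom buttons, no scroll wheel zoom
var printPreviewMap = L.map('printPreview', {
    zoomControl: false,
    scrollWheelZoom: false
});

//the data overlays then get added with the usual L.geoJson calls, as on the main map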

You’ll notice that under the scale bar is an actual, honest-to-goddess ratio scale, which applies to the printed map. Okay, so there’s like, A LOT of math behind this, because the map scale varies based on latitude, zoom level of the map, page size, and the size and shape of the preview window. Here’s the code:

function adjustScale(){
    //change symbol sizes and ratio scale according to paper size
    var prevWidth = $("#printPreview").width();
    var prevHeight = $("#printPreview").height();
    var longside = getLongside();
         
    //find the mm per pixel ratio
    var mmppPaper = prevWidth > prevHeight ? longside / prevWidth : longside / prevHeight;
    var mapZoom = printPreviewMap.getZoom();
    var scaleText = $("#printBox .leaflet-control-scale-line").html().split(" ");
    var multiplier = scaleText[1] == "km" ? 1000000 : 1000;
    var scalemm = Number(scaleText[0]) * multiplier;
    var scalepx = Number($("#printBox .leaflet-control-scale-line").width());
    var mmppMap = scalemm / scalepx;
    var denominator = Math.round(mmppMap / mmppPaper);
    $("#ratioScale span").text(denominator);
    $("#previewLoading").hide();
    return [mmppMap, mmppPaper];
};

function getLongside(){
    //get longside in mm minus print margins
    var size = $("#paperSize select option:selected").val();
    var series = size[0];
    var pScale = Number(size[1]);
    var longside;
    if (series == "A"){ //equations for long side lengths in mm, minus 10mm print margins
        longside = Math.floor(1000/(Math.pow(2,(2*pScale-1)/4)) + 0.2) - 20;
    } else if (series == "B"){
        longside = Math.floor(1000/(Math.pow(2,(pScale-1)/2)) + 0.2) - 20;
    };
    return longside;
};

What the printerface does is give the user access to all of these variables, and change the map scale depending on them. Fortunately, international paper sizes greatly simplify this math by maintaining the same aspect ratio (√2) regardless of size. If the window size changes, the preview map can change size proportionally and still represent the printed page. In order to better represent what the printout will look like, the ratios allow for automatically adjusting the size of the symbols on the map based on the window size or chosen paper size. So if the user changes the paper size to, say, A1 (a typical poster size), the preview map looks like this:

Screenshot3_crop.png

Note that the ratio scale has increased quite a bit. Think about what this will look like when it turns into an 841 mm x 594 mm poster. The bounding box has been preserved, symbols will be the same proportions relative to each other as in the preview, and they will be the same absolute size as the symbols printed on any other page size (8 mm wide for the icons). Also note the new labels for village features. These are scripted to show up whenever the scale is greater than 1:250,000. More on these in a minute.

The last tricky step to printing is how to actually resize everything so the map prints at the right size with the correct bounding box. Folks, I’m here to tell you, figuring this out was no walk in the park. I may have prematurely lost some hair over it. In the end, the solution was as simple yet un-straightforward as the cheat that lets you beat Myst within the first five minutes (for you whipper-snappers, that’s a shameless ’90s computer game reference). Here’s the code in case you want to pick through it; if you just want the punch line, skip on down.

$("#printButton").click(function(){    

    //transform map pane
    var mapTransform = $("#printPreview .leaflet-map-pane").css("transform"); //get the current transform matrix
    var mmpp = adjustScale(); //get mm per css-pixel
    var multiplier = mmpp[1] * 3.7795; //multiply paper mm per css-pixel by css-pixels per mm to get zoom ratio
    var mapTransform2 = mapTransform + " scale("+ multiplier +")"; //add the scale transform
    $("#printPreview .leaflet-map-pane").css("transform", mapTransform2); //set new transformation

    //set new transform origin to capture panning
    var tfMatrix = mapTransform.split("(")[1].split(")")[0].split(", ");
    var toX = 0 - tfMatrix[4],
        toY = 0 - tfMatrix[5];
    $("#printPreview .leaflet-map-pane").css("transform-origin", toX + "px " + toY + "px");

    //determine which is long side of paper
    var sdim, ldim;
    if ($("#paperOrientation option[value=portrait]").prop("selected")){
        sdim = "width";
        ldim = "height";
    } else {
        sdim = "height";
        ldim = "width";
    };

    //store prior dimensions for reset
    var previewWidth = $("#printPreview").css("width"),
        previewHeight = $("#printPreview").css("height")

    //set the page dimensions for print
    var paperLongside = getLongside(); //paper length in mm minus 20mm total print margins minus border
    $("#printPreview").css(ldim, paperLongside + "mm");
    $("#printPreview").css(sdim, paperLongside/Math.sqrt(2) + "mm");
    $("#container").css("height", $("#printPreview").css("height"));
    
    //adjust the scale bar
    var scaleWidth = parseFloat($("#printBox .leaflet-control-scale-line").css('width').split('px')[0]);
    $("#printBox .leaflet-control-scale-line").css('width', String(scaleWidth * multiplier * 1.1) + "px");
    $("#printBox .leaflet-control-scale").css({
        'margin-bottom': String(5 * multiplier * 1.1) + "px",
        'margin-left': String(5 * multiplier * 1.1) + "px"
    });

    //adjust north arrow
    var arrowWidth = parseFloat($(".northArrow img").css('width').split("px")[0]),
        arrowMargin = parseFloat($(".northArrow").css('margin-top').split("px")[0]);
    $(".northArrow img").css({
        width: String(arrowWidth * multiplier * 1.1) + "px",
        height: String(arrowWidth * multiplier * 1.1) + "px"
    });
    $(".northArrow").css({
        "margin-right": String(arrowMargin * multiplier * 1.1),
        "margin-top": String(arrowMargin * multiplier * 1.1)
    });

    //print
    window.print();

    //reset print preview
    $("#printPreview .leaflet-map-pane").css("transform", mapTransform); //reset to original matrix transform
    $("#printPreview").css({
        width: previewWidth,
        height: previewHeight
    });
    //reset scale bar
    $("#printBox .leaflet-control-scale-line").css('width', scaleWidth+"px");
    $("#printBox .leaflet-control-scale").css({
        'margin-bottom': "",
        'margin-left': ""
    });
    //reset north arrow
    $(".northArrow img").css({
        width: arrowWidth + "px",
        height: arrowWidth + "px"
    });
    $(".northArrow").css({
        "margin-right": arrowMargin,
        "margin-top": arrowMargin
    });
});

Okay, here’s the punchline. The key is using a CSS transform to temporarily scale up the whole leaflet-map-pane div, which holds all of the layers in the print preview map. Leaflet already adjusts the symbol positions using a transform translation, and I need to preserve that transform to reset the map after it’s printed. But to print it, I need to add a scale transform that multiplies the size of everything by the ratio of paper millimeters per screen millimeter (which you get if you cross-multiply paper mm per pixel times pixels per screen mm). Once I figured this out, I had to figure out how to adjust the transform origin so the bounding box didn’t move out from under my paper map. This involved dissecting the transform matrix and turning the last two numbers negative as the x and y coordinates of the transform origin, which moves the whole map back up and to the left, where it should be (I still can’t keep straight why this works even after re-reading the above link, but I’m sure you mathy people can figure it out).

The rest of the above code just messes with the map div and the accessory elements to get them all the right print size, pulls the trigger, then sets everything back right for the screen viewer. Oh, I also have some helpful print CSS styles, which I’m not going to bother explaining:

@media print {
    @page {
        size: auto;
        margin: 10mm;
    }

    body {
        /*border: 5px solid blue;*/
    }

    #container {
        /*border: 4px solid green;*/
        position: absolute;
    }

    #cover, #maparea, #printOptions, #ppmmtest, .closeDialog, .resize, .msg_qq {
        display: none !important;
    }

    #printBox, #printPreview {
        position: absolute;
        bottom: 0;
        left: 0;
        top: 0;
        right: 0;
        /*border: 1px solid red;*/
    }

    #printPreview {
        border: 1px solid black !important;
        background-color: white !important;
    }

    #printPreview span {
        text-shadow: -1px -1px 0 #FFF, 1px -1px 0 #FFF, -1px 1px 0 #FFF, 1px 1px 0 #FFF;
    }

    .leaflet-control-scale-line {
        text-align: center;
    }
}

So yeah, that’s printing from Leaflet in a nutshell. Just to prove I’m not blowing smoke, here’s a scan of the printed version of the second screenshot of the post.

scan.png

Pretty good, huh?

Now, I promised you I would talk about labels. Leaflet and labels are like Cowboy and Octopus. After all, why would you need to put labels on a Leaflet map when they’re baked into your tiles? Well, again we come to this minor issue of printed raster tiles looking like something a dung beetle would enjoy. So I needed to figure out how to plunk labels onto my map. For the area features, which are drawn as SVG overlays, I decided the easiest thing would be to just add the labels as SVG <text> elements, placing them in the center of each feature’s bounding box. Unfortunately, I’ve found that once Leaflet draws its overlays, you can’t just put new elements into the SVG and expect them to render. So I cheated a little and brought in D3 to do the job. Because D3 is magic.

//for SVG polygons, add label to center of polygon
var g = d3.selectAll('#printPreview g'),
    scaleVals = adjustScale(),
    denominator = Math.round(scaleVals[0]/scaleVals[1]),
    areaTextSize = denominator > 500000 ? '0' : String(8/scaleVals[1]),
    labelDivBounds = [];

g.each(function(){
    var gEl = d3.select(this),
        path = gEl.select('path');
    if (path.attr('class').indexOf('|') > -1){
        var labeltext = path.attr('class').split('|')[0],
            bbox = path.node().getBBox(),
            x = bbox.x + bbox.width/2,
            y = bbox.y + bbox.height/2,
            color = path.attr('stroke'),
            textSize = x == 0 && y == 0 ? 0 : areaTextSize,
            text = gEl.append('text')
            .attr({
                x: x,
                y: y,
                'font-size': textSize,
                'text-anchor': 'middle',
                fill: color
            })
            .text(labeltext);
    };
});

One thing I want to point out here is that textSize variable. I was having a bit of trouble with a bunch of labels piling on top of each other at one particular spot on the map, because apparently once a feature is off the map, its bounding box coordinates become the negative of half the corresponding dimension of the SVG. So I just shut those labels off. There’s also a line that sets the areaTextSize to 0 if the scale is less than 1:500,000 to avoid cluttering up the map too much. There’s a similar D3 .each() loop to adjust the labels each time the map is moved or resized that I’m not showing here.

I also wanted to add labels to the village symbols on the map. But these are actually raster icons, not part of the Leaflet overlays SVG. First I played with just a straight JS loop that would use jQuery to grab the icons and plunk absolutely-positioned divs or spans on the map for each icon. This choked the browser. So then my thought was to make Leaflet do the work, creating an L.geoJson layer for all of the labels I wanted, and cooking the labels themselves with a pointToLayer function. The problem here is that the only SVG layers Leaflet creates are paths, and the only non-SVG layers Leaflet creates are icons! No text or any other elements.

So I decided to do something clever. I would trick Leaflet into putting the labels on the map by creating icons with the feature names in an alt attribute and feeding them a bad src URL! Aside from a pesky image 404 error in the Console, this worked great in Firefox. But Chrome annoyingly adds a broken image icon and cuts off the alt text; IE is a little better but still adds an X icon. So finally I decided I just had to add a label class to Leaflet. Fortunately, this wasn’t too hard. I just extended the Icon class with the bare minimum modification to create <span> elements instead of <img> elements:

L.Label = L.Icon.extend({
    _createImg: function (text, el) {
        el = el || document.createElement('span');
        el.innerHTML = text;
        return el;
    }
});

L.label = function (options) {
    return new L.Label(options);
};

Then I just had to instantiate an L.geoJson layer and feed it my label class. Voilà! I magically had village labels on the map! (At scales of over 1:250,000, again to avoid clutter).
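
The instantiation looks something like this (the data variable, property name, and class name are placeholders; the trick is that the label text rides in through the iconUrl option, which is what ends up being passed to _createImg):

var villageLabels = L.geoJson(villageData, {
    pointToLayer: function(feature, latlng){
        //the "iconUrl" is really just the label text for the span
        var labelIcon = L.label({
            iconUrl: feature.properties.NAME, //placeholder property name
            className: 'villageLabel'
        });
        return L.marker(latlng, {icon: labelIcon});
    }
}).addTo(printPreviewMap);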

Screenshot4_crop.png

I did end up adjusting the CSS just a little with some in-line script and a stylesheet style:

//inline script to adjust label css
$("#printPreview span").css({
    'margin-left': pointTextSize,
    'font-size': pointTextSize,
    'line-height': pointTextSize
});

//css style to create label outlines for improved readability
#printPreview span {
    text-shadow: -1px -1px 0 #FFF, 1px -1px 0 #FFF, -1px 1px 0 #FFF, 1px 1px 0 #FFF;
}

It’s not perfect, but the end result seems to be a readable printed Leaflet map. I hope this has given those of you who want to step outside of our media silos some ideas for further experimentation!

Open Web Mapping: How do we teach this stuff?

Today I gave a talk at the NACIS conference about redesigning the lab curriculum for Geography 575, UW-Madison’s Interactive Cartography and Geovisualization course, to more effectively teach the new Open Web Platform mapping tools that are now the industry standard for web mapping. People seemed to like it.

But the 20-minute (or, in my case, 23-minute…oops) conference talk/slideshow format presents a tradeoff between conveying lots of necessary information and slides that support the talk without textsploding the visual cortex. After paring down to the bare bones of my story, I was left with a number of unavoidably text-heavy slides showing our 2014 curriculum outline and comparing it to the revised sequence for the next iterations of the course. All I could do was point to them and say, “don’t read this now, here are the key bits to notice…” People who didn’t see the talk can go back and read the content, but they won’t get the key takeaways I presented verbally (Why is this web mapping workflow thing so important it gets 7 slides? Why are those curriculum topics highlighted in purple?).

So, to fill in the missing pieces of the online slideshow without too much extra labor on my part, here is the outline of my talk. I am in the process of submitting a more detailed journal paper and will post the citation here if it ever gets published.

Open Web Mapping: How do we teach this stuff? NACIS 2015 talk

  1. Title slide
    1. Intro self
    2. Talk based on my experience as a curriculum planner and TA for G575
    3. 4-credit course on cartographic UI/UX design taught by Rob Roth
    4. 2 hr per week lab component
    5. Will talk about
      1. Desired learning outcomes for course
      2. Curriculum redesign to match new teaching technology stack
      3. Evaluation process
      4. What comes next
  2. Desired Learning Outcomes
    1. First two outcomes corresponded to what were initially three lab assignments: animation, sequencing and retrieve interactions, and geovisualization
      1. Eventually merged first two lab assignments into one Leaflet lab without animation requirement
    2. Third outcome informed by final project—collaborative start-to-finish work experience with real world scenario chosen by students.
    3. Fourth and fifth outcomes “soft” outcomes that come with learning tools
      1. Translate across technologies
      2. 4: necessary cognitive development to apply technologies
      3. 5: best practices for design & development informed by research
  3. Technology stack I
    1. Taught with Flash until 2012, then moved to open web mapping technologies as Flash was dumped as industry standard
    2. Flash: encapsulated IDE with seamless integration of design software and well-featured native scripting language
  4. Technology stack II
    1. Open web platform technology stack is bigger and more unwieldy
    2. Job no longer to teach students 1 or 2 pieces of software to make great maps; is to teach integration of dozens of platforms, languages, libraries, frameworks to make great maps
    3. Major unanticipated consequences for teaching techniques
    4. Didn’t put much thought into restructuring curriculum initially; kept everything but lab technologies the same
    5. Results in 2013 were very mixed
      1. Award-winning maps
      2. Many students struggled, especially with learning D3
  5. Web Mapping Workflow I
    1. Rich dissertation: idea of Web Mapping Workflow to describe process of creating a web map on Open Web Platform
    2. Ideally, Workflow should inform scope and sequence of our teaching
      1. Scope: what is taught
      2. Sequence: in what order we teach it
    3. My concept of workflow is adapted, not identical to Rich’s
    4. First step in workflow is to design web map.
      1. Mostly teach this in G572, Graphic Design for Cartography; but included as part of final project
  6. Web Mapping Workflow II
    1. Second step is to set up a development environment—akin to a workshop with hammer, saw, drill, etc.
    2. Initially treated this topic lightly, but adding more structure to it
  7. Web Mapping Workflow III
    1. Third step is to find and format data
    2. Always takes longer than expected—often most time-consuming stage
    3. Proved to be tricky for students; required more attention
  8. Web Mapping Workflow IV
    1. Fourth step is creating basic Markup that forms backbone of the web page
    2. Can be seen as the space for cartographic representation—where the actual elements of the web map exist
  9. Web Mapping Workflow V
    1. Fifth step is script
    2. Fourth and fifth steps highly integrated; dynamic markup created by the script
    3. Script is necessary for adding cartographic interaction
  10. Web Mapping Workflow VI
    1. Sixth step is fine-tuning
    2. Could include debugging, but more usability evaluation and feedback
    3. Haven’t had time to teach usability evaluation; do now emphasize debugging more
  11. Web Mapping Workflow VII
    1. Final step is Deployment
    2. This has always been included in course, but relatively minor
    3. Used to be in-house, now must be off-site
  12. 2014 Curriculum Sequence
    1. Based on 2013 experience, designed a new curriculum sequence
    2. Heavy direct instruction of key concepts first couple weeks
    3. Workflow stages shown by letters
    4. Workflow stages not taught sequentially, but progressively more integrated over time
  13. How well did it work?
    1. Two reasons for wanting to assess the course:
      1. Did we meet the desired learning outcomes?
      2. Where were the sticking points? Threshold concepts (Bampton)
    2. Used four tools:
      1. Entrance survey to find out where students were at
      2. Instructor logs for qualitative observations
      3. Student extra credit assignments narrating stumbling blocks and aha moments
      4. Exit survey—gave most of quantitative data on student learning
  14. Entrance Survey
    1. Weighted toward the bottom of familiarity with open web technologies
    2. Students rated themselves even less familiar initially in exit survey
    3. Course teaches computer science concepts to cartographers and design majors—would be a very different class if we were teaching cartography to computer scientists
  15. Instructor Logs
    1. Critical practice for reflective teaching—highly recommended
    2. Had to teach to different learning speeds for different groups of students
    3. Some difficulties we didn’t think students would experience
    4. Teaching D3 the most successful outcome of course in comparison to 2013
      1. Most likely due to teaching data first and highly structured D3 lessons
  16. Student Feedback
    1. Helped highlight some misconceptions students held, such as underestimation of time, and threshold concepts, such as object-oriented programming
    2. Online examples could be both helpful and troublesome depending on how well explained and how close they fit lab scenarios
    3. Students displayed increasing understanding both of open web concepts and of what they did not yet know
    4. Filled with evidence for new computational thinking skills (quote)
  17. Exit survey
    1. Self-reported expertise with tools we taught increased from low to moderate, statistically significant
      1. No increase in open web tools we didn’t teach
    2. Asked students to rate overall emotional experience; average increased with each assignment
      1. No very negative responses to D3 lab
  18. Learning outcomes
    1. Which of the desired learning outcomes were demonstrated?
      1. Leaflet—vast majority of students got passing grade, rated their knowledge of Leaflet higher than beginning
      2. D3—same plus positive, confidence-building experience
      3. Final projects—all passing; a few students completed on their own; some professional quality
      4. Cognitive development—demonstrated in feedback, logs, exit survey
      5. Concept integration—harder to say; students rated concept transfer highly in exit survey but relied on lecture material little
        1. Room for improvement
  19. Topic sequence I
    1. In exit survey, asked students to reorder any topics they thought were out of order in course sequence
    2. Boxplots show ranges, quartiles, medians; circles show actual topic location in sequence
  20. Topic sequence II
    1. Most topics had median close to actual location
    2. Notice “using developer tools” and “GitHub” had extremely low median compared to actual location in sequence
      1. Foundational threshold concepts—should have come earlier in sequence
  21. Topic sequence comparison I
    1. We will be re-teaching 575 in-house this Spring, online in Spring 2017
    2. My summer job was to write lab modules for the online version
    3. I reworked module order based on assessment results
    4. Will also be used as basis of topic order for this spring’s residency course lab
  22. Topic sequence comparison II
    1. This shows where some of the key threshold concepts were in 2014 iteration
    2. Each topic expanded into multiple topics with more structure
    3. Some non-helpful topics eliminated
    4. Advantage of online: students work at own pace. Disadvantage: harder to give individualized assistance or review remedial concepts
      1. Entire online curriculum more structured by necessity; need not be for residency course
  23. Topic sequence comparison III
    1. This slide shows the topics students identified as coming too late in the sequence
    2. Developer Tools moved to Module 2
    3. GitHub mostly moved to Module 1, but separated between development platform aspects of Git and GitHub.com and deployment aspects of GitHub.io
      1. Will use it as main platform for project storage, versioning, and grading
    4. No final project in online course due to limitations of collaboration over vast distances
  24. Thank you
    1. This has been a bit about our experiences redesigning our G575 lab curriculum to better teach new open web platform technologies
    2. Github URL is source for published versions of the course lab assignments
    3. Student projects
    4. Happy to take questions.
  25. Bonus slides!!! Check out my students’ work!

Connecting PostGIS to Leaflet using PHP

For a few years now, I’ve been building wikimaps that rely on a PostgreSQL/PostGIS database to store geographic data and Leaflet to display that data on a map. These two technologies have increasingly become the industry-standard open-source back- and front-end web mapping tools, used together by such behemoths as OpenStreetMap and CartoDB. While you can use a go-between such as a GeoServer Web Feature Service (WFS) to connect the two, the simplest, most flexible, and most reliable way I’ve found to connect data to map is a little PHP script that essentially formats the queries and lets PostGIS and JavaScript do all the heavy lifting (note that my opinion on this has changed since I wrote my series of tutorials on web mapping services two years ago).

It occurred to me recently that I should share my basic technique, and I did so for UW-Madison Cartography students in a short presentation as part of our Cart Lab Education Series. This blog post is essentially a transcription of that tutorial. It assumes you have already installed PostgreSQL with the PostGIS extension and the pgAdminIII GUI (I highly recommend installing all three through the Stack Builder), and that you have a working understanding of SQL queries, HTML, JavaScript, and Leaflet.js. I will gently introduce some PHP; this shouldn’t be too painful if you already have a bit of background in JS.

Let’s get started, shall we?

I have provided the tutorial sample code on GitHub. A colleague just introduced me to the wonders of Adobe Brackets, so let’s use it to take a look at the directory tree first:

Directory Tree

As you can see, I’ve provided a data folder with a complete shapefile of some example data I had lying around. This open-access dataset covers frac sand mines and facilities in western Wisconsin and comes from the Wisconsin Center for Investigative Journalism. The first step is getting the data into a PostGIS-enabled database using pgAdminIII’s PostGIS Shapefile and DBF Loader (enabling this plug-in is slightly tricky; I recommend these instructions). After you have created or connected to your PostGIS database, select the loader plug-in from the pgAdminIII Plugins menu. Click “Add File”, navigate to the data directory, and select the shapefile. Make sure you change the number under the SRID column from 0 to 26916, the EPSG code for the UTM Zone 16N projection the data is stored in. PostGIS needs this projection information to perform spatial queries on the data. Once you have changed this number, click “Import”.

PostGIS Shapefile and DBF Loader

With your table created, we can now move to the fun part—code! For formatting clarity, I have only included screenshots of the code below, and will issue a reminder that the real deal is posted on GitHub here. I’ll only briefly touch on the index.html and style.css files. Within index.html are links to the jQuery, jQuery-ui, and Leaflet libraries. I am mainly using jQuery to facilitate easy AJAX calls and jQuery-ui to create an autocomplete menu for one of the form’s text inputs. Leaflet, of course, makes the map. There are two divs in the body, one for the map and one for a simple form. The most useful thing to point out here is the name attributes of the text input elements, which will become important when constructing the SQL queries sent to the database.

html snippet

Style.css contains basic styles for placing the map and form side-by-side on the page, and bears no further mention.

main.js snippet

Turning to main.js (above), I have defined three global variables. The first, map, is for the Leaflet map. The second, fields, is an array of field names corresponding to some of the many attribute fields in my fracsandsites table in the database; this is the attribute data I want to see in the pop-ups on the map (other fields may be added). The third variable, autocomplete, is an empty array that will hold feature names retrieved from the database for use in the autocomplete list.

The screenshot above shows the first two functions defined after the global variables, with a $(document).ready call to the initialize function. This function sets the map height based on the browser’s window height, then creates a basic Leaflet map centered on Wisconsin with a simple Acetate tileset for the basemap. It then issues a call to the getData function. Here’s where the fun really begins.
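In case the screenshot is hard to read, here is roughly what those globals and the initialize function look like. Treat this as a reconstruction rather than the exact code: the map div id, tile URL, and view settings are approximations, while the field names come from the SQL query shown later in this post.

//global variables
var map, //the Leaflet map
    fields = ["gid", "createdby", "featname", "feattype", "status", "acres"], //attributes for the pop-ups
    autocomplete = []; //feature names for the autocomplete list

function initialize(){
    //fit the map div to the browser window height
    $("#map").height($(window).height());

    //basic Leaflet map centered on Wisconsin (center and zoom are approximations)
    map = L.map("map").setView([44.8, -90.5], 7);

    //Acetate basemap (tile URL here is a placeholder--use the provider's current URL)
    L.tileLayer("http://a.acetate.geoiq.com/tiles/acetate/{z}/{x}/{y}.png", {
        attribution: "Acetate tileset"
    }).addTo(map);

    //load the data from the database
    getData();
}

$(document).ready(initialize);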

The jQuery.ajax method is a very simple substitute for a whole lot of ugly native XMLHttpRequest code. It can take data either as a URL-encoded parameter string or as a JavaScript object; I’m using the latter because it is neater. You can include any parameters you like, but the important part is to think about what you need from the DOM to create the SQL query that’s going to grab your data. I’m designating the table name and the fields here, although you could also hard-code both in the PHP if you don’t need them to be dynamic.
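The call itself looks something like the sketch below; the success callback, mapData, is the function we’ll get to shortly, and everything else follows from the description above.

function getData(){
    //request the initial data from the PHP middleware;
    //the parameter names match the $_GET keys discussed below
    $.ajax("getData.php", {
        data: {
            table: "fracsandsites", //name of the table in the PostGIS database
            fields: fields //the global fields array defined above
        },
        success: mapData //callback that will build the GeoJSON and map layer
    });
}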

OK, let’s flip over and see what’s going on in getData.php…

php snippet

If you’re not used to seeing PHP code, some things here may look a bit odd. The first two lines declare that what follows is PHP code for the interpreter and enable some feedback on any I/O errors that occur. PHP is very picky about requiring semicolons at the end of each statement that isn’t a control structure (an opening or closing curly brace), and a syntax error will cause the whole thing to fail silently despite line 2. Lines 5-9 assign the database credentials to variables, which are denoted with the dollar sign (unlike JS, there is no var keyword). Make sure to change these to your own database credentials. On line 11, the $conn variable is assigned the connection returned by pg_connect, which connects to the database using the parameters provided above. Note that in PHP there is a difference between double and single quotes: both denote a string, but inside double quotes you can put variables directly into the string without concatenation and the interpreter will recognize them as variables rather than string literals. The following if statement tests the integrity of the connection and quits with an error if it fails.

One important thing to note here is that for this to work, you must already have PHP installed and enable the php_pgsql extension by uncommenting it in your php.ini file, which is stored in your PHP directory (probably somewhere in Program Files if you’re on a PC). You can get PHP here.

Lines 18 and 19 retrieve the data sent over from the $.ajax method in the JS. $_GET is a special predefined variable in PHP: an array of the parameters and associated values submitted to the server with a GET header (there is an equivalent for the POST header). In PHP, an array is analogous to both an object and an array in JavaScript; it’s just that the latter form uses zero-based sequential integers as keys. In this case, we can think of the $_GET array as just like the AJAX data object, with the exact same keys and values (table with the string value "fracsandsites" and fields with its array of string values). Line 18 assigns the first to a new PHP $table variable and line 19 assigns the second to a $fields variable.
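If you can’t make out the screenshot, here is a rough reconstruction of this first portion of getData.php. The credential values are placeholders, and the error-reporting call is an approximation; swap in your own details.

<?php
//show errors while developing (the exact error-reporting call may differ)
ini_set("display_errors", "On");

//database credentials--change these to your own
$server = "localhost";
$port = "5432";
$dbname = "frasandwiki";
$user = "postgres";
$password = "mypassword";

//connect to the database; double quotes let PHP expand the variables in the string
$conn = pg_connect("host=$server port=$port dbname=$dbname user=$user password=$password");

//quit with an error message if the connection failed
if (!$conn){
    echo "Not connected to the database!";
    exit;
}

//parameters sent by the $.ajax call in main.js (lines 18-19)
$table = $_GET["table"];
$fields = $_GET["fields"];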

Since $fields is itself an array, to use it in a SQL query its values must be joined into one comma-separated string. The foreach loop on line 23 does this, assigning each array index to the variable $i and each value to the variable $field. Within the loop, each value is concatenated onto the $fieldstr variable (the . is PHP’s concatenation operator), preceded by l. because the SQL statement will assign the alias l to the table name (the reason for this will become clear later).

After all fields have been concatenated, a final piece is added to $fieldstr: ST_AsGeoJSON(ST_Transform(l.geom,4326)). This is the first bit of code we’ve seen that is specifically meant for PostGIS. We want to extract the geometry of each feature in the table in a form that’s usable to Leaflet, and that form is GeoJSON. Fortunately for us—and this is what makes PostGIS so easy to use for this purpose—PostGIS has a native function to translate geometry objects stored in the database into GeoJSON-formatted strings. ST_AsGeoJSON can simply take the geometry column name as its parameter, but for the data to work on a Leaflet map, it has to be transformed into the WGS84 coordinate reference system (unprojected lat/long coordinates). For this purpose, PostGIS gives us ST_Transform, which takes the geometry column name and the SRID of the CRS into which we want to transform it (in this case, the familiar-to-web-mappers 4326).
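Reconstructed from that description, the loop and its final addition look something like this:

//join the field names into one comma-separated string for the SELECT statement
$fieldstr = "";

foreach ($fields as $i => $field) {
    //prefix each field with the alias l that the query gives the table
    $fieldstr = $fieldstr . "l." . $field . ", ";
}

//finally, add the geometry, reprojected to WGS84 and formatted as a GeoJSON string
$fieldstr = $fieldstr . "ST_AsGeoJSON(ST_Transform(l.geom,4326))";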

At this point, we now have all of the components of our first SQL query (line 31). If you were to print (or echo in PHP parlance) the whole thing without the variables, you would see

$sql = "SELECT l.gid, l.createdby, l.featname, l.feattype, l.status, l.acres, ST_AsGeoJSON(ST_Transform(l.geom,4326)) FROM fracsandsites l";

And, in fact, if you copied everything inside the quotes into the SQL editor in pgAdminIII, you would get a solid response of those attributes from all features in the table. Go ahead and do it. DO IT NOW!

sql editor output

For now, I’m going to skip the next few lines (we’ll come back to them later) and wrap up my PHP with this:

PHP snippet

Line 45 does several things at once: it sends the query to the database using the pg_query method, assigns the result to the variable $response, and tests that a response actually came back. The while loop on lines 51-56 retrieves each table row from the $response object (note: this is emphatically not an array; hence the use of the pg_fetch_row method) and echoes each attribute value, with the attribute values separated by comma-spaces and the rows separated by semicolons. As previously mentioned, PHP’s echo command “prints” data, in this case by sending it back to the browser in the XMLHttpRequest response object.
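Here is an approximate reconstruction of that wrap-up code. The original combines the query and the response test on one line, and the exact delimiter characters may differ slightly, but the logic is the same.

//send the query ($sql was assembled above) and make sure we got something back
$response = pg_query($conn, $sql);

if (!$response){
    echo "No response from the database!";
    exit;
}

//echo each row back to the browser: values separated by comma-spaces,
//each row ended with a semicolon
while ($row = pg_fetch_row($response)) {
    foreach ($row as $value) {
        echo $value . ", ";
    }
    echo ";";
}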

At this point we can go back to the browser and look at what we have. If you’re using Firebug, by default it will log all AJAX calls in the console, and you can see the response once it’s received. You should be able to see something like this:

Response in the console

Now all we have to do is process this data through a bit of JavaScript and stick it on the map. Easy-peasy. I’ll start with the first part of the mapData callback function:

js snippet

Lines 39-44 remove any existing layers from the Leaflet map, which isn’t really necessary at this stage but will become useful later when we implement dynamic queries using the HTML input form. For now, skip down to Line 47 and notice that we are starting to build ourselves a GeoJSON object from scratch. This is really the easiest way to get this feature data into Leaflet. If you need to be reminded of the exact formatting, open any GeoJSON file in a text editor, or start making one in geojson.io. Once we have a shell of a GeoJSON with an empty features array, the next step is to go ahead and split up the rows of data using the trailing comma-space and semicolon used in getData.php to designate the end of each row. Since these are also hanging onto the end of the last row, once the data is split into an array we need to pop off the last value of the array, which is an empty string. Now, if you console.log the dataArray, you should see:

dataArray in console
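For reference, the first half of mapData looks roughly like the sketch below. The instanceof check for preserving the tile layer and the exact ", ;" delimiter are approximations of what the real code does.

function mapData(data){
    //remove existing overlay layers (lines 39-44); checking for L.GeoJSON layers
    //is one way to leave the tile basemap alone
    map.eachLayer(function(layer){
        if (layer instanceof L.GeoJSON){
            map.removeLayer(layer);
        }
    });

    //shell of a GeoJSON object to hold the features
    var geojson = {
        "type": "FeatureCollection",
        "features": []
    };

    //split the response into rows using the delimiter echoed by getData.php
    var dataArray = data.split(", ;");
    //the trailing delimiter leaves an empty string at the end of the array
    dataArray.pop();

    //...each row is turned into a GeoJSON feature next (see the following sketch)
}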

Now, for each row, we need to correctly format the data as a GeoJSON feature:

js snippet

Each value of the dataArray is split at the comma-spaces into its own array (d) of attribute values plus the geometry string. We then create the GeoJSON feature object. The geometry is the last value of d, which we access using the length of the fields array: since fields is one value shorter than d, its length matches the last index of d. properties is assigned an empty object, which is subsequently filled with attribute names and values by the loop on lines 69-71. The if statement on lines 74-76 tests whether the feature name is already in the autocomplete array and, if not, adds it. Finally, the new feature is pushed into the GeoJSON features array. Lines 82-84 activate the autocomplete list on the text input for the feature name in the query form. If you were to print the GeoJSON to the console and examine it in the DOM tab, you should see:

the geojson in the DOM tab
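The loop that builds the features looks roughly like the following. I’m using JSON.parse on the geometry string so L.geoJson gets a proper object, and assuming the feature-name input is named featname; both are approximations of the code in the repo.

//continuing mapData: turn each row into a GeoJSON feature
dataArray.forEach(function(row){
    //split the row at the comma-spaces into attribute values plus the geometry string
    var d = row.split(", ");

    var feature = {
        "type": "Feature",
        //the geometry string is the last value; fields.length is its index
        "geometry": JSON.parse(d[fields.length]),
        "properties": {}
    };

    //match each attribute value to its field name (lines 69-71)
    for (var i = 0; i < fields.length; i++){
        feature.properties[fields[i]] = d[i];
    }

    //add the feature name to the autocomplete array if it isn't there yet (lines 74-76)
    if (autocomplete.indexOf(feature.properties.featname) === -1){
        autocomplete.push(feature.properties.featname);
    }

    geojson.features.push(feature);
});

//activate the jQuery-ui autocomplete on the feature name input (lines 82-84)
$("input[name=featname]").autocomplete({
    source: autocomplete
});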

Now that we have our GeoJSON put together, we can go ahead and use L.geoJson to stick it on the map.

js snippet

I won’t go through all of this because it should be familiar code to anyone who has created GeoJSON overlays with Leaflet before. If you’re unfamiliar, I recommend starting with the Using GeoJSON with Leaflet tutorial.
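For completeness, a minimal version of that overlay code might look like this; the pop-up formatting here is just one simple option, not necessarily what’s in the repo.

//stick the homemade GeoJSON on the map, with a pop-up listing each attribute
L.geoJson(geojson, {
    onEachFeature: function(feature, layer){
        var html = "";
        for (var prop in feature.properties){
            html += "<b>" + prop + ":</b> " + feature.properties[prop] + "<br>";
        }
        layer.bindPopup(html);
    }
}).addTo(map);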

This gets us through bringing the data from the database table to the initial map view. But what’s exciting about this approach is how dynamic and user-interactive you can make it. To give you just a small taste of what’s possible, I’ve included the simplest of web forms with which a user can build a query. If you’re at all familiar with SQL queries through database software, ArcMap, etc. (and you should be if you’ve gotten this far in this tutorial), you know how powerful and flexible they can be. When you’re designing your own apps, think deeply about how to harness this power through interface components that the most novice of users can understand. As a developer, you gain power through giving it to users.

As previously mentioned, the form element in the index.html file contains two text inputs with unique name attributes. The first of these is designated for distance (in kilometers), and the second is for the name of an anchor feature. We will use these values to perform a simple buffer operation in PostGIS, finding all features within the specified distance of the anchor feature. Ready to go? OK.

In index.html, the value of the form’s action attribute is "javascript:submitQuery()". This calls the submitQuery function in main.js. Here is that function:

js snippet

We use jQuery’s serializeArray method to get the values from the form inputs. This returns an array of objects, each of which contains the name and value of one input. Then, instead of creating the data object inline with the AJAX data key, we create it as a variable so we can add the serialized key-value pairs to it. This is done with the forEach loop, which takes each object in the formdata array and adds its name as a key on the data object and its value as the corresponding value. Get it? Good. (If not, just console.log the data object after the loop).
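A sketch of submitQuery, minus the AJAX call discussed next, might look like this (the form selector is an assumption):

function submitQuery(){
    //get the form input values as an array of {name, value} objects
    var formdata = $("form").serializeArray();

    //start from the same parameters used in the initial getData call
    var data = {
        table: "fracsandsites",
        fields: fields
    };

    //add each form input's name and value as a key-value pair on data
    formdata.forEach(function(input){
        data[input.name] = input.value;
    });

    //a new $.ajax call to getData.php goes here, with data and the mapData callback
}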

With the data object put together, it’s time to issue a new $.ajax call to getData.php. Let’s flip back over and take another look at that. Everything is the same except now we have a few more $_GET parameters to deal with and a different query task. Hence the if statement on lines 34-40:

php snippet

The if statement tests for the presence of the featname parameter in the list of parameters sent through AJAX. If it exists, that parameter’s value gets assigned to the $featname variable and the distance parameter value, multiplied by 1000 to convert kilometers to meters, gets assigned to the $distance variable.

Now for the hard part. Remember our simple SQL statement in which we gave the table and all of its attributes an alias (l) for no apparent reason? Well, the reason is that we now have to concatenate SQL code for a table join onto it. When you join a table to itself in PostgreSQL, each “side” of the join needs its own alias so Postgres can tell them apart. Since the initial table reference is on the left side of the JOIN operator, I assigned the original table the alias l, for left, and the joined table r, for right. Obvious, huh? Well, maybe not. In any case, the principle is that although both sides of the join reference the same table, Postgres will treat them as if they were different tables. This is a LEFT JOIN, meaning that the output will come from the table on the left, while the table on the right is used for comparison.

There are two parts to the comparison here: the ON clause and the WHERE clause. The ST_DWithin statement following ON specifies that output from the left table will be rows (features) within the user-given distance of rows (features) from the right table; since our table is stored in a UTM projection, the distance units will be meters (if it were stored as another CRS, say WGS84, we would have to use ST_Transform on each table’s geometry for it to work). The WHERE clause narrows the right-hand comparison to a single feature: the one named by the user in the input form. Translating to English, you could read this as, “Give me the specified attribute values and geometry for all of the features in the left table within my specified distance of the feature I named in the right table.” Or something like that.
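Reconstructed from that description, the extra PHP looks something like the sketch below; the original formatting differs, and note there is no input sanitization here (see the closing caveats).

//if the user submitted the query form, extend the query with a spatial self-join
if (isset($_GET["featname"])){
    $featname = $_GET["featname"];
    $distance = $_GET["distance"] * 1000; //convert km to meters

    //find all features in l within the given distance of the feature named in r
    $sql .= " LEFT JOIN " . $table . " r";
    $sql .= " ON ST_DWithin(l.geom, r.geom, " . $distance . ")";
    $sql .= " WHERE r.featname = '" . $featname . "'";
}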

OK, that’s the biggest headache of the whole demo, and it’s over. The features that get returned from this query now go back to the mapData function in main.js. The map.eachLayer loop that removes existing layers from the map now has a purpose: get rid of the original features so only the returned features are shown. The new features are plunked into a new homemade GeoJSON and onto the map through L.geoJson. Here’s an example using a query for all sites within 10 km of the Chippewa Sands Company Processing Plant:

screenshot of query results

That’s it. There’s lots more you should learn about data security (particularly with web forms), PDO objects, error prevention and debugging, etc. before going live with your first app. But if you’ve gotten through this entire tutorial, congratulations—you’re on your way to designing killer user-friendly database-centered web maps.

Leaflet Draw: Implementing Custom Tools

As software developers, I think we can get caught up in the power and elegance of our own creations and fail to consider the importance of explaining their inner workings in a way that is understandable to those who were not intimately involved in their creation. Another way of saying this: teaching is hard. We don’t always know what we know. This has been on my mind this week as I’ve found myself struggling to learn a number of very useful but not-very-well-documented tools by a popular web cartography startup.

Over the past few months, I have been working on an interesting project: building a Leaflet-based wikimap of herding routes in eastern Senegal for use by academics, NGOs, and government officials examining land use conflicts between farmers and herders in the area. Because the users will not be computer experts, I am particularly concerned about not making skill-based assumptions and am trying to carefully think through the interface and how to make it as simple and novice-friendly as possible, while also providing a powerful suite of analysis tools.

A screenshot of my latest wikimap project, showing my custom tools interface.

Leaflet Draw tools

Some of these are measurement tools. The best out-of-the-box tools for drawing vectors and measuring lengths and areas on a Leaflet map are included in the Leaflet Draw library. This library has become the standard drawing plug-in for Leaflet, used for such apps as geojson.io, and for good reason: it’s lightweight, elegant, and functionally versatile. Unfortunately for me, all of the documentation in the README is geared toward using the toolbar that is integrated into the library (left). What to do if I need to design my own toolbar? While developers at Mapbox and CartoDB are super good at reverse-engineering and editing tools for their own needs, I’m still at an API-reading and Google-for-examples skill level. Plus I didn’t really want to modify the library itself to fit my needs; I just wanted to make use of its internal structure in a way that didn’t require the vertical toolbar.

There IS a way to do this, but it took me several hours to figure out, mainly because tapping into the library’s innards isn’t covered in the API.  I’ll paste the code I came up with below, then walk through it. First, for measuring distances:

function measure(){
  var stopclick = false; //to prevent more than one click listener

  var polyline = new L.Draw.Polyline(map, {
    shapeOptions: {
      color: '#FFF'
    }
  });
  polyline.enable();

  //user affordance
  $("button[name=measure] span").html(messages.beginmeasure);
	
  //listeners active during drawing
  function measuremove(){
    $("button[name=measure] span").html(messages.distance + polyline._getMeasurementString());
  };
  function measurestart(){
    if (stopclick == false){
      stopclick = true;
      $("button[name=measure] span").html(messages.distance);
      map.on("mousemove", measuremove);
    };
  };
  function measurestop(){
    //reset button
    $("button[name=measure] span").html(messages.measure);
    //remove listeners
    map.off("click", measurestart);
    map.off("mousemove", measuremove);
    map.off("draw:drawstop", measurestop);
  };

  map.on("click", measurestart);
  map.on("draw:drawstop", measurestop);
}

First, I should point out that the measure() function is called by the onclick attribute of the “Measure distance” <button>. Immediately, I define a boolean variable, stopclick, which will be used to ensure that the following code won’t set duplicate event listeners, which can get messy. Then, to start drawing the measure line, I use:

var polyline = new L.Draw.Polyline(map, {
  shapeOptions: {
    color: '#FFF'
  }
});
polyline.enable();

That’s it. Just creating a new object instance and calling its .enable() method (which isn’t shown in the API) starts up the drawing tool. But it’s not really obvious to the user what to do next, and I also want to display the distance inside the interface button to make it extra easy to see. So the first thing to do is tell the user how to use the draw tool:

//user affordance
$("button[name=measure] span").html(messages.beginmeasure);

Here I’m referencing a separate object, messages, that holds every word of human language that appears anywhere in the interface. In this case, messages.beginmeasure holds the string "Click map to begin". I do this because the map must be bilingual, and storing my interface strings in a separate object with a copy of each in French (the official language of Senegal) and one in English will facilitate switching between the languages.
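I won’t reproduce the whole messages object, but the structure is something like the sketch below; the French strings and the switching mechanism shown here are just illustrative.

//all interface strings live here so the whole UI can switch languages at once
var languages = {
    en: {
        measure: "Measure distance",
        beginmeasure: "Click map to begin",
        distance: "Distance: ",
        measureArea: "Measure area",
        area: "Area: "
    },
    fr: {
        measure: "Mesurer la distance",
        beginmeasure: "Cliquez sur la carte pour commencer",
        distance: "Distance : ",
        measureArea: "Mesurer la superficie",
        area: "Superficie : "
    }
};

//messages points at whichever language is currently active
var messages = languages.en;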

Next, I have three event handler functions:

//listeners active during drawing
function measuremove(){
  $("button[name=measure] span").html(messages.distance + polyline._getMeasurementString());
};
function measurestart(){
  if (stopclick == false){
    stopclick = true;
    $("button[name=measure] span").html(messages.distance);
    map.on("mousemove", measuremove);
  };
};
function measurestop(){
  //reset button
  $("button[name=measure] span").html(messages.measure);
  //remove listeners
  map.off("click", measurestart);
  map.off("mousemove", measuremove);
  map.off("draw:drawstop", measurestop);
};

The first handler, measuremove(), simply updates the button contents with the measurement string as the user moves their cursor across the map. Notice I had to use an internal function, _getMeasurementString(), to get at this info, which is unfortunate. The library really should include a simple getMeasurement() method published in the API. But it doesn’t. Oh well.

The next handler, measurestart(), makes sure that we’re not trying to pull measurement data before the user starts drawing, because that gets messy and starts throwing errors in the console. I have to use a click listener on the map to trigger this, but only want to trigger it once instead of each time the user clicks on the map. Hence stopclick. The handler only executes its code if stopclick hasn’t been altered from false to true yet, and within that code it sets stopclick to true. It’s going to change the contents of the button to say "Distance: " and apply a mousemove listener with the measuremove() handler discussed above.

Finally, we have the measurestop handler. This is going to reset the button contents to the original "Measure distance" string, then remove the three event listeners added within the measure() function so we don’t place any duplicate listeners.

Then finally:

map.on("click", measurestart);
map.on("draw:drawstop", measurestop);

The listeners above should be fairly self-explanatory: when to start measuring (when the user clicks on the map) and when to stop. Ah! With the second listener, we finally get to use something that’s actually specified in the API: the "draw:drawstop" map event. Well, okay, the initial Polyline options are listed in the API too. But not a lot else.

I got all done making this linear measure go, then decided: why stop there? Wouldn’t it be nice for my users to be able to measure areas too? Like, the area of a farmer’s field or the size of a village, perhaps? So I did one for area too:

function measureArea(){
  var stopclick = false; //to prevent more than one click listener

  var polygon = new L.Draw.Polygon(map, {
    showArea: true,
    allowIntersection: false,
    shapeOptions: {
      color: '#FFF'
    }
  });
  polygon.enable();

  //user affordance
  $("button[name=measureArea] span").html(messages.beginmeasure);
	
  //listeners active during drawing
  function measuremove(){
    $("button[name=measureArea] span").html(messages.area + polygon._getMeasurementString());
  };
  function measurestart(){
    if (stopclick == false){
      stopclick = true;
      $("button[name=measureArea] span").html(messages.area);
      map.on("mousemove", measuremove);
    };
  };
  function measurestop(){
    //reset button
    $("button[name=measureArea] span").html(messages.measureArea);
    //remove listeners
    map.off("click", measurestart);
    map.off("mousemove", measuremove);
    map.off("draw:drawstop", measurestop);
  };

  map.on("click", measurestart);
  map.on("draw:drawstop", measurestop);
};

This one’s very similar to the first one, except for the options and a few different messages. One important thing about the options that, again, the API doesn’t tell you: if you want the Polygon tool to show the area measurement, you have to set allowIntersection to false in addition to showArea: true. That took digging through the source code to figure out.

I hope my multiple hours of trial and error can help prevent the same headache for someone else. And, ideally, I hope these holes in the Leaflet Draw documentation get filled. It really is a powerful set of tools. Happy playing!

Measure Area
Leaflet Draw custom area measurement tool

D3 Selection Mysteries… Solved?

Edit: It was brought to my attention that I should acknowledge the excellent resource for learning D3 provided by Scott Murray in his book Interactive Data Visualization for the Web and in his AlignedLeft blog tutorials. I can’t say enough about how well-written and beginner-friendly the book and tutorials are. This post is simply meant to extend the understanding I got from reading Murray’s blog post on Binding Data, which I highly recommend reading first.

The selectAll-data-enter sequence used to create multiple selections in D3 has always been a bit mysterious to me. I even went so far as to call .enter() “one of the great mysteries of the universe” while teaching it. But it’s really not all that mysterious. I don’t know why I didn’t think to do this until now, but a simple series of console.log statements can reveal the stages of multiple selection creation and data binding.

1. Empty selection:

var provinces = map.selectAll(".provinces");
console.log(provinces);

screenshot1

Here we have an empty selection. Note that it is simply an empty set of nested arrays. The nesting exists because D3 selections are grouped (the outer array holds one group of elements per parent node), but for our purposes the inner array is what’s important.

2. Adding the Data:

var provinces = map.selectAll(".provinces")
    .data(topojson.feature(europe, europe.objects.FranceProvinces).features);
console.log(provinces);

screenshot2

When an array of data is added to the selection, blank slots are created in the inner array, one for each element in the data array.

3. Binding the Data:

var provinces = map.selectAll(".provinces")
    .data(topojson.feature(europe, europe.objects.FranceProvinces).features)
    .enter();
console.log(provinces);

screenshot3

What do each of those data objects look like?

screenshot4

Calling .enter() exposes the placeholder nodes created by the data join, so that each element in the selection array is now an object holding its datum within the __data__ property.

4. Appending an Element:

var provinces = map.selectAll(".provinces")
    .data(topojson.feature(europe, europe.objects.FranceProvinces).features)
    .enter()
    .append("g")
    .attr("class", "provinces");
console.log(provinces);

screenshot5

So what’s in each <g> element?

screenshot6

Notice how the datum (__data__) has been attached as a property of the element. This is what gets passed every time a method further down in the block calls an anonymous function with a d parameter.
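For example, you could tack something like this onto the block above to see each datum alongside its element; this is a hypothetical addition, purely for demonstration.

var provinces = map.selectAll(".provinces")
    .data(topojson.feature(europe, europe.objects.FranceProvinces).features)
    .enter()
    .append("g")
    .attr("class", "provinces")
    //each anonymous function down the chain receives the bound datum as d
    //(and the element's index as a second argument, i)
    .each(function(d, i){
        //d is the GeoJSON feature stored in this <g> element's __data__ property
        console.log(i, d.properties);
    });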

Smell that? That smells like… understanding.

Bonus! Here’s the video of Mike Bostock’s awesome keynote at FOSS4G2014 back in September: